
5

Quantification of Infinitely Thick Specimens


by XRF Analysis

Johan L. de Vries*
Eindhoven, The Netherlands
Bruno A. R. Vrebos
Philips Analytical, Almelo, The Netherlands

I. INTRODUCTION

Quantitative x-ray fluorescence (XRF) analysis involves the conversion of measured
fluorescent intensities to the concentrations of the analytes. In most of the literature on this
subject, the measured intensities at this stage of the analytical procedure are assumed to be
corrected for background and line overlap. The same assumptions will be made here,
although some comments will be made on these topics throughout this chapter. Fur-
thermore, energy-dispersive and wavelength-dispersive x-ray spectrometers differ sig-
nificantly with respect to the counting channel and the resolution or resolving power that
can be achieved. This has a direct influence on, for example, the line overlap correction
procedures that need to be applied (if any) to obtain net intensities. Wavelength-dispersive
x-ray fluorescence spectrometry has been discussed in Chapter 2; energy-dispersive systems
are the subject of Chapters 3 (for x-ray tube excitation) and 7 (for radioisotope excitation).
The software provided by the vendors of x-ray spectrometry systems for their customers
compensates to some extent for the peculiarities of each design, and considerable differ-
ences in the software packages offered with the instruments are to be expected. It is
therefore difficult to give guidelines that are generally applicable.

In some cases, most notably those where the intensity of the background is essen-
tially constant (between specimens), the correction for background can be neglected. This
is usually the case when the difference in terms of composition of the standard specimens
and the unknowns is rather small [i.e., the concentration ranges (not necessarily the
concentration levels) of all components are rather small]. This condition of similarity
between specimens is sufficient to ensure constant background, but it is not a requirement,
as, in certain cases, the intensity of the background is (nearly) constant even when the
concentration range of one or more analytes is quite large. Note that the only requirement

*Retired.

Copyright 2002 Marcel Dekker, Inc.


here is for the background to be constant; at this point, no restrictions on the magnitude of
the background have been imposed. Under these circumstances, correction for back-
ground is not required and the gross count rate (or intensity) can be used with equal
success as the net intensity for the determination of the specimen's composition.
The situation with line overlap is somewhat more complicated, as it is almost never
constant. This is due to the intensity of the overlapping line(s): It is very rarely a constant
for different specimens. There are several methods to correct for line overlap, and the
preference of the individual analyst as well as the type of spectrometer and its associated
software (if any) are some of the factors influencing the final choice.
The basic principles of quantitative analysis have not changed much since the early
years of x-ray fluorescence analysis, especially for the wavelength-dispersive systems.
Energy-dispersive systems have seen more changes due to, for example, the development of
different solid-state detectors and primary optics, such as capillaries (see Chapter 11), and
the increasingly available computing power, which allowed for more comprehensive
spectral treatment programs at the fingertips of the analysts (see also Chapter 4). In
general, the method has always involved some calibration step, during which the
intensities of selected elements in a suite of standard specimens are measured according to
a scheme or a recipe that has been optimized earlier. These intensities are combined with
the composition of the standard specimens, and the calibration curves for the elements of
interest are then constructed. These calibration curves might include a correction for
matrix effects of some sort. The resulting calibrations could then be used for the analysis
of unknown specimens with compositions similar to the standards. This method is rather
time-consuming, but it yields, in general, the highest possible analytical accuracy. Also,
approximate knowledge of the composition of the specimens prior to the analysis is
assumed. Such assumptions can be made very implicitly (e.g., when selecting the
calibration curve to be used for analysis). If the method is routinely used and the number
of unknown specimens to be analyzed is rather large, the benefits of this instrumental
technique become very clear when compared to classical wet chemistry methods, which
tend to be much more time-consuming.
Since the second half of the 1980s, interest has been growing for a different kind of
quantitative analysis on wavelength-dispersive spectrometers, in which the elements pre-
sent in the specimens and their approximate concentration levels are not known prior to
the analysis. Semiquantitative analysis consists of an intimate combination of qualitative
and quantitative analyses. During the qualitative analysis, the presence of the elements
present in the specimen is established. This can be done in a variety of ways. Qualitative
analysis is based on the collection of a spectrum of the specimen, followed by peak search
and peak identification. Alternatively, the intensities of a rather large and fixed set of
characteristic lines are collected. The lines are selected by the manufacturer and allow for
the determination of most of the commonly occurring elements. In both cases, the net
intensities are then determined and a quantification is made, involving matrix correction
and general calibration lines for each of the elements. These calibration lines have been
determined earlier from a set of standard specimens, although some manufacturers insist
on calling their method "standardless." The packages that are available commercially
differ on the basis of the qualitative analysis (accumulating a spectrum or measurement on
fixed wavelength positions), the number of standard specimens used, the flexibility of the
spectrometer configuration that they can cope with, and the method used to correct for
matrix effects.
The focus of energy-dispersive systems has always been somewhat more qualitative
and more oriented toward research, flexibility, and the analysis of small batches of
unknown specimens. The software allowed for quantitative analysis without the need to
have standards similar to the unknowns. These systems are thus more flexible in their use
in that respect. This does not mean that such systems are unsuitable for routine analysis in
industry; many of them actually are used for that purpose.

In this chapter, several methods for the quantification of infinitely thick specimens
are discussed. For all methods described here, it is assumed that the specimens are
infinitely thick: The intensity of the radiation from the specimen is constant and is not
affected by increasing the thickness of the specimen. The analysis of specimens with less-
than-infinite thickness is discussed in Chapter 6. Also, the specimens are assumed to be
homogeneous: Their composition is the same throughout the specimen. This assumption is
valid for a large range of material types found in many applications. The most important
exception is specimens with one or more thin coatings.

II. CORRELATION BETWEEN COUNT RATE AND SPECIMEN COMPOSITION


A. Introduction
In quantitative analysis, the measured x-ray fluorescent intensity of a given element is
converted into its weight concentration in the specimen. As a first approximation, one
would expect a linear relationship. Each atom of the analyte element i has the same
probability of being excited by the primary photons and emitting its characteristic photons
with wavelength λi. Indeed, if separate atoms or ions (e.g., in a gas or in a very diluted
solution) are considered, the following relation holds:

    Ii = Ki Wi                                                            (1)

where Ii is the measured intensity of the fluorescent radiation of the analyte i and Wi is the
weight fraction of the analyte i in the specimen.
The proportionality constant Ki consists of many physical and instrumental factors,
such as the following:

The intensity and the distribution of wavelengths of the photons in the primary beam
The probability that an atom i emits its characteristic radiation λi
The probability that these photons λi pass through the measuring channel:
collimators, diffracting crystal, pulse-height window
The probability that these photons are being detected and registered

For a given instrument, with the voltage and power on the x-ray tube kept constant, these
factors are constant for the analyte i, and their combined value can be determined by
measuring the fluorescent intensity of the pure element i. However, we are generally
dealing with compact specimens where the atoms are arranged into chemical compounds.
Both the primary x-rays and the fluorescent x-rays will be absorbed by the different atoms
in the specimen.

B. General Relationship Between Intensity and Concentration


1. Primary Fluorescence by Monochromatic Radiation
First, excitation by monochromatic radiation will be considered. Let the intensity of the
incident beam with wavelength l0 at the surface of the specimen be given by I0(l0). This
beam strikes the surface of the specimen at an angle c0 (see also Fig. 1). A parallel beam is
assumed here and the specimen is considered to be extending to innity in all three
dimensions. The incident radiation is gradually absorbed by the specimen, and at a layer at
Figure 1 Schematic representation of the geometry involved in the calculation of primary
fluorescence emission.

depth t below the surface, the remaining fraction of the intensity It(λ0) is given by the
Lambert-Beer law:

    It(λ0) = I0(λ0) exp[−μs(λ0) ρs t csc ψ′]                              (2)

μs(λ0) is the mass-attenuation coefficient of the specimen for photons with wavelength λ0
(the subscript s refers to the specimen) and ρs is the density of the specimen. Note that, due
to the angle of incidence, the path length traveled is given by t csc ψ′. The mass-attenuation
coefficient μs(λ0) in this equation is thus calculated for the specimen; this is done by adding
the mass-attenuation coefficients of all elements j present in the specimen, each multiplied
by its mass fraction Wj:

    μs(λ0) = Σ_{j=1}^{n} μj(λ0) Wj                                        (3)

where n is the total number of elements present in the specimen. The fraction of the in-
cident beam absorbed by the analyte i in the layer between t and (t + dt) is given by

    Wi μi(λ0) ρs csc ψ′ dt                                                (4)

It is assumed here that the composition of the specimen is uniform throughout. In other
words, Wi and Wj are independent of the position of the layer (t, t + dt) within the
specimen.
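Equation (3) amounts to a weight-fraction-weighted sum of elemental coefficients. A minimal sketch in Python, where the attenuation values and the two-element composition are hypothetical placeholders, not tabulated data:

```python
# Sketch of Eq. (3): the specimen mass-attenuation coefficient is the
# weight-fraction-weighted sum of the elemental mass-attenuation coefficients.

def specimen_mu(weight_fractions, elemental_mu):
    """mu_s(lambda) = sum over j of W_j * mu_j(lambda), Eq. (3)."""
    assert abs(sum(weight_fractions.values()) - 1.0) < 1e-9, "W_j must sum to 1"
    return sum(weight_fractions[el] * elemental_mu[el] for el in weight_fractions)

# Hypothetical two-element specimen (values in cm^2/g, for illustration only)
W = {"Fe": 0.7, "Ni": 0.3}
mu = {"Fe": 70.0, "Ni": 90.0}
print(specimen_mu(W, mu))  # about 76.0 (= 0.7*70 + 0.3*90)
```

The same helper applies at any wavelength; only the elemental coefficients change.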
Only a fraction of the photons absorbed creates vacancies in the K shell; this fraction
is given by (riK − 1)/riK, where riK is the absorption jump ratio of the K shell of element i.
The fraction of K vacancies emitting x-rays is given by the fluorescence yield ωiK. The
fraction of Kα photons in the total of x-rays emitted for the analyte is given by the
transition probability fiKα. These factors can be combined in a factor designated Qi(λ0, λi):

    Qi(λ0, λi) = [(riK − 1)/riK] ωiK fiKα                                 (5)

In the above derivation, it is assumed that the characteristic line of interest is a Kα line. If
another line is considered, the relevant changes need to be made.
The characteristic photons thus generated are emitted isotropically, without a
preferential direction. Only a fraction is emitted toward the detector. If Ω is the
solid angle viewed by the collimator-detector system, expressed in steradians, then this
fraction is given by Ω/4π. The angle Ω should be small enough that the beam can be
considered a parallel beam leaving the specimen at a single well-defined angle ψ″
with the surface. The fraction of characteristic photons with wavelength λi not absorbed
between the layer (t, t + dt) and the surface is given by
    exp[−μs(λi) ρs t csc ψ″]                                              (6)
All photons reaching the surface and propagating in the direction indicated by the
angle ψ″ are assumed to be detected. If the detector has absorbing elements (such as
windows) or detection efficiencies different from unity, then these can also be taken into
account. Also, the attenuation by the medium (e.g., air) between specimen and detector
can be calculated using similar expressions.
The intensity of element i, as excited by the incident beam with wavelength λ0, is
labeled Pi(λ0) (explicitly denoting the primary fluorescence effect) and is given by

    Pi(λ0) = I0(λ0) exp[−μs(λ0) ρs t csc ψ′]
             × (Ω/4π) μi(λ0) Wi Qi(λ0, λi) ρs csc ψ′ dt
             × exp[−μs(λi) ρs t csc ψ″]                                   (7)
Combining factors leads to

    Pi(λ0) = I0(λ0) (Ω/4π) μi(λ0) Wi Qi(λ0, λi) csc ψ′
             × exp[−(μs(λ0) csc ψ′ + μs(λi) csc ψ″) ρs t] ρs dt           (8)
The contributions of all layers between the surface and the bottom of the specimen have to
be summed. This can be done by integrating the above expression over dt, from 0 (the surface)
to the bottom. In practice, for x-rays, the thickness of bulk specimens can be considered to
be infinite, so the upper limit is ∞. Note that t is always used in combination with ρs, so the
integration will be done over ρs t. Taking all constant factors outside the integral, one obtains

    Pi(λ0) = I0(λ0) (Ω/4π) μi(λ0) Wi csc ψ′ Qi(λ0, λi)
             × ∫₀^∞ exp[−(μs(λ0) csc ψ′ + μs(λi) csc ψ″) ρs t] ρs dt      (9)

From a textbook on calculus,

    ∫₀^∞ exp(−ax) dx = 1/a                                               (10)

Taking a = μs(λ0) csc ψ′ + μs(λi) csc ψ″ and noting that ρs dt = d(ρs t), the following
expression is obtained for the primary fluorescence of the analyte i in the specimen s:

    Pi(λ0) = [I0(λ0) μi(λ0) Wi csc ψ′ Qi(λ0, λi) (Ω/4π)]
             / [μs(λ0) csc ψ′ + μs(λi) csc ψ″]                            (11)
Often, the element-specific factors, given by Eq. (5), are combined with the instrument-
specific factor Ω/4π. This leads to a simpler expression for the primary fluorescence:

    Pi(λ0) = Ki I0(λ0) Wi μi(λ0) / [μs(λ0) + G μs(λi)]                    (12)

where Ki = (Ω/4π) Qi(λ0, λi) and G = csc ψ″/csc ψ′ = sin ψ′/sin ψ″.
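Equation (12) can be sketched numerically. In the sketch below, Qi and Ω/4π are folded into the constant Ki, and all numerical inputs are illustrative placeholders rather than tabulated fundamental parameters:

```python
import math

# Sketch of Eq. (12), primary fluorescence under monochromatic excitation.

def primary_fluorescence(I0, Ki, Wi, mu_i_l0, mu_s_l0, mu_s_li, psi1_deg, psi2_deg):
    """P_i = K_i I0 W_i mu_i(l0) / [mu_s(l0) + G mu_s(li)], Eq. (12)."""
    G = math.sin(math.radians(psi1_deg)) / math.sin(math.radians(psi2_deg))
    return Ki * I0 * Wi * mu_i_l0 / (mu_s_l0 + G * mu_s_li)

# The same Wi in a light and in a heavy matrix: the denominator (the total
# specimen attenuation) differs, so the intensities differ -- the matrix effect.
light = primary_fluorescence(1.0, 1.0, 0.1, 300.0, 50.0, 40.0, 63.0, 40.0)
heavy = primary_fluorescence(1.0, 1.0, 0.1, 300.0, 200.0, 160.0, 63.0, 40.0)
print(light > heavy)  # True: the heavier matrix absorbs more
```

The comparison illustrates why a single calibration constant fails across dissimilar matrices: the intensity per unit Wi depends on μs of the whole specimen.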
In the above derivation, the following assumptions have been made:
(a) The specimen is completely homogeneous.
(b) The specimen extends to innity in three dimensions.
(c) The primary rays are not scattered on their way to the layer dt.
(d) No enhancement eects occur.
(e) The characteristic radiation is not scattered on its way to the specimen surface.
The simplest case is excitation of Kα (or Kβ) photons. For the characteristic photons
associated with the L lines, other effects, such as Coster-Kronig transitions and so forth,
have to be taken into account when describing the fraction of absorbed primary photons
that give rise to characteristic photons. For a more detailed discussion of these effects,
refer to Chapter 1.

2. Secondary Fluorescence Excited by Monochromatic Radiation


Under certain conditions, the characteristic photons of an element j can excite atoms of
the analyte i. This will lead to additional characteristic photons of element i. The term
additional is used here in the sense that these photons are not considered in the above
derivation for the primary fluorescence and, consequently, are not predicted by Eq. (12). It
is beyond the scope of this chapter to derive the mathematical expressions following a
rigorous approach, but certain aspects of secondary fluorescence emission are readily seen
from the above derivation and from Figure 2. In Figure 2, two layers are now considered.
The layer on the left, indicated by j, at a depth of t1 with a thickness of dt1, is where the
primary fluorescence of element j is excited. This is described by Eqs. (2)-(5). Obviously,
the element considered at this stage is the enhancing element j, so the fundamental
parameters for j need to be used instead of those for i in Eqs. (2)-(5). Once the fluorescent
radiation is created, it will be absorbed on its way through the specimen. The precise
direction is of no concern, as we are now no longer considering the fraction that travels
toward the detector. The attenuation along its path is described by equations such as
Eq. (2) or Eq. (6). Keep in mind that the angles and the distances involved are now dif-
ferent and that the wavelength λ for which the mass attenuation needs to be calculated
[Eq. (3)] is now λj instead of λ0. At the second layer, indicated by i, at a depth of t2 with a
thickness of dt2, part of this (primary fluorescence) radiation will be absorbed. Again, this
effect is described by an equation such as Eq. (4), with the angle ψ′ replaced by the angle of
this path. The factors to be applied in order to lead to characteristic radiation of the
analyte i are given by Qi(λj, λi); Eq. (6) describes the attenuation (the wavelength is now λi)
on the way toward the detector. These factors can easily be recalculated. The complication
is in the description of the geometry, especially in the description of the distance between
the two layers: The position of the two layers relative to one another is not restricted, the
path of the photon λj does not have to be in the plane of the figure, nor is the second layer

Figure 2 Schematic representation of the geometry involved in the calculation of secondary
fluorescence emission.



always at a larger depth than the first layer. The angle X′ in Figure 2 can take any value
between 0 and 2π, and the three paths shown are not necessarily in one plane. This aspect
has been discussed by Li-Xing (1984).
After integration, the final result Sij(λ0, λj) is given by

    Sij(λ0, λj) = [I0(λ0) csc ψ′ μj(λ0) Wj Qj(λ0, λj) μi(λj) Wi Qi(λj, λi) (Ω/4π)]
                  / {2 [μs(λ0) csc ψ′ + μs(λi) csc ψ″]}
                  × { [sin ψ′/μs(λ0)] ln[1 + μs(λ0)/(μs(λj) sin ψ′)]
                    + [sin ψ″/μs(λi)] ln[1 + μs(λi)/(μs(λj) sin ψ″)] }    (13)
where Sij(λ0, λj) describes the secondary fluorescence (enhancement) of the analyte i by
characteristic photons with wavelength λj; these photons have been excited by primary
photons with wavelength λ0.

The photons to be considered for the enhancement of the analyte are not limited to
Kα only, as the Kβ lines and other characteristic lines that have sufficient energy to excite
the shell of interest of the analyte i must also be taken into account. For this reason, all
individual contributions need to be added in order to calculate the total enhancement of
the analyte i by element j:

    Sij(λ0) = Σ_{λj} Sij(λ0, λj)                                          (14)

There are two criteria to be satisfied for a characteristic photon λj to be able to cause
secondary fluorescence:
1. It must be excited by the incident photon λ0 (this means that the energy of the
incident photon must be higher than the energy of the absorption edge
associated with λj).
2. The energy of the photon λj must be higher than the energy of the absorption
edge of the analyte.
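The two criteria can be expressed compactly in energy terms. The sketch below uses the Pb L-line energies given in Table 1 (from the appendices of Chapter 1); the 60 kV tube voltage is an assumed example value:

```python
# The two enhancement criteria, in energy (keV) terms: line j can enhance
# analyte i only if (1) the incident photon can excite that line and
# (2) the line itself can excite the analyte's absorption edge.

def can_enhance(E_incident, E_edge_j, E_line_j, E_edge_i):
    excited_by_source = E_incident > E_edge_j   # criterion 1
    excites_analyte = E_line_j > E_edge_i       # criterion 2
    return excited_by_source and excites_analyte

# Pb example (Table 1): PbLg1 at 14.76 keV, excited from the L2 edge
# (15.20 keV), enhancing the L3 edge (13.04 keV); 60 kV tube assumed.
print(can_enhance(60.0, 15.20, 14.76, 13.04))  # True
```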
The derivation of the intensities of characteristic x-rays as a function of specimen
composition was first done by Sherman (1955). The resulting equations, however,
were rather unwieldy. Equations (9) and (13) were published later by Shiraiwa and
Fujino (1966) and Sparks (1976).

In practice, many computer programs only consider a few lines for enhancement,
and lines that have a low transition probability f are usually neglected. Data regarding
these transition probabilities for a line within a series can be found, for example, in
Appendix II of Chapter 1. The enhancement phenomenon will be more pronounced if the
x-rays of the enhancing elements are only slightly more energetic than the energy of the
absorption edge of the element i. The enhancement may contribute up to 40-50% of
the total fluorescent radiation Ii, especially where the concentration of the enhancing
elements is much greater than the concentration of the analyte. This applies even more for
the light elements, where the primary spectrum may not be very effective, as the most
intense wavelengths are far away from the absorption edges of these light elements.

The effect of scattered radiation is generally ignored, although its contribution can
also be calculated (Pollai et al., 1971).
If the characteristic line of the analyte considered is one of its L lines, then it is
possible that other lines of the analyte itself enhance the characteristic line considered. The
energy of the K edge of La (atomic number Z = 57) is 38.9 keV (see Chapter 1), so the K
lines of all elements with Z < 57 are excited if, for example, an x-ray tube is used at
Table 1  Data for Selected Absorption Edges and Characteristic Lines of Pb

Absorption edge   Energy (keV)   Characteristic line   Energy (keV)
K                 88.04
L1                15.86          Lα1 (L3-M5)           10.55
L2                15.20          Lβ1 (L2-M4)           12.61
L3                13.04          Lγ1 (L2-N4)           14.76

Source: Appendix I and Appendix II of Chapter 1 (this volume).

voltages above 40 kV, as is commonly the case with wavelength-dispersive spectrometers.
Under these conditions, the L lines of these elements are enhanced by the K lines. The
situation for elements with high atomic numbers is more complex. Let us consider Pb
(Z = 82). Lead is very often determined through its Lα1 or Lβ1 line because the K lines of
Pb are too energetic to be measured with reasonable efficiency by scintillation or Si(Li)
detectors. Also, excitation voltages higher than 88 kV are required in order to excite the K
lines. Thus, one would not readily expect enhancement by Pb on its own L lines. From the
data in Table 1, however, it follows that the PbLα1 line (L3-M5) is enhanced by PbLγ1
(L2-N4) because its energy (14.76 keV) is higher than the energy of the L3 edge (13.04 keV).

The total excited characteristic intensity is then given by adding the primary and the
secondary fluorescence contributions:

    Ii(λ0) = Pi(λ0) + Σj Sij(λ0, λj)                                      (15)

Also, tertiary fluorescence is possible, where the incident photon excites element k (pri-
mary fluorescence), whose radiation excites element j (secondary fluorescence), whose
radiation, in turn, excites element i, causing tertiary fluorescence. This contribution is
generally lower than 3% of the total fluorescence and is commonly ignored, as shown by
Shiraiwa and Fujino (1967, 1974).
3. Excitation by Continuous Spectra
If the incident beam is polychromatic rather than monochromatic, Eq. (15) needs to be
calculated for each wavelength. Wavelengths longer than the wavelength of the absorption
edge λedge,i of the analyte cannot excite fluorescence, so these need not be considered. If
J(λ) is the function representing the tube spectrum, then Eqs. (11) and (13) can still be
used, provided that I0(λ0) is replaced by J(λ) dλ, representing the intensity of the incident
spectrum at wavelength λ:

    Ii = ∫_{λmin}^{λedge,i} [Pi(λ) + Σj Sij(λ, λj)] dλ                    (16)

This equation allows the calculation of the intensity of characteristic radiation of a given
analyte in an infinitely thick specimen. Equations such as Eqs. (11), (14), and (16) are often
referred to as fundamental parameter equations because they allow the calculation of the
intensity of fluorescent radiation as a function of the composition of the specimen (weight
fractions Wi), the incident spectrum [J(λ)], and the configuration of the spectrometer used
(ψ′, ψ″, and Ω). All other variables used are fundamental constants, such as the mass-at-
tenuation coefficient of a given element at a given wavelength or its fluorescence yield
and so forth.
C. Some Observations
The integration over t in Eq. (9) is taken from zero to infinity. It is obvious that the first
layers contribute more to the intensity of λi than the more inward layers. Theoretically,
even at large values of t, a very minor contribution to the intensity is still to be expected.
Often, the (minimum) infinite depth is defined arbitrarily as that thickness t where the
contribution of the layer (t, t + dt) is 0.01% of that of the surface layer. In this case, it is
defined relative to the surface layer. Alternatively, it is defined as the thickness where the
contribution to the total intensity is less than 1%. In this case, it is relative to the total
intensity from a truly infinitely thick specimen. The value of the infinite thickness de-
pends on the values of the absorption coefficients and the density of the specimen. In
practice, it may vary from a few micrometers for heavy matrices and long wavelengths to
centimeters for short wavelengths and light matrices, as in solutions.
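The 0.01% criterion follows directly from the exponential in Eq. (9): the contribution of a layer at depth t, relative to the surface, falls off as exp(−χρt), with χ combining the incoming and outgoing path factors. A sketch, with a hypothetical steel-like matrix as the example:

```python
import math

# Sketch of the "infinite depth" criterion: the depth t at which a layer's
# contribution has fallen to a given fraction of the surface layer's,
# solving exp(-chi * rho * t) = fraction for t.

def infinite_depth_cm(mu_s_l0, mu_s_li, rho, psi1_deg, psi2_deg, fraction=1e-4):
    chi = (mu_s_l0 / math.sin(math.radians(psi1_deg))      # csc = 1/sin
           + mu_s_li / math.sin(math.radians(psi2_deg)))
    return -math.log(fraction) / (chi * rho)

# Hypothetical steel-like matrix: mu ~ 100 and 70 cm^2/g, rho = 7.8 g/cm^3
d = infinite_depth_cm(100.0, 70.0, 7.8, 60.0, 40.0)
print(f"{d * 1e4:.1f} um")  # a few tens of micrometers for this example
```

Repeating the calculation with the small attenuation coefficients of a light, dilute matrix gives depths in the millimeter-to-centimeter range, consistent with the span quoted above.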
For a given element and a fixed geometry, an efficiency factor C(λ0, λi) can be in-
troduced:

    C(λ0, λi) = μi(λ0) / [Σ_{j=1}^{n} Wj μj(λ0) + G Σ_{j=1}^{n} Wj μj(λi)]   (17)

It is obvious that the terms Σ Wj μj are the origin of nonlinear calibration lines, as
variations in Wj influence the value of the denominator. For a pure metal, this reduces to

    C(λ0, λi) = μi(λ0) / [μi(λ0) + G μi(λi)]                              (18)
In the general case, the wavelengths in the primary spectrum close to the absorption
edge are the most effective in exciting the analyte i. The efficiency factor C(λ0, λi) is thus a
combination of the absorption curve of analyte i as a function of λ and the spectral dis-
tribution. In a first approximation, an effective wavelength λe can be introduced, which
has the same effect of excitation of element i as the total primary spectrum. The exact value
of this λe will be influenced by the characteristic tube lines if they are active in exciting i.
Otherwise, λe can, in general, be assumed to have a value of approximately two-thirds of
the absorption edge λedge. Its actual value is, however, dependent on the chemical com-
position of the specimen. For instance, for Fe, the wavelength of the K edge λedge is
0.174 nm (7.11 keV), and an effective wavelength λe of 0.116 nm is obtained using this rule
of thumb. In the ZnO-Fe2O3 system, λe was found to vary from 0.130 nm for 100% Fe2O3
to 0.119 nm for 10% Fe2O3 in ZnO. The estimated value of 0.116 nm is in good agreement
with the experimental one for Fe2O3.
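The two-thirds rule of thumb is a one-line calculation; checking it for the Fe K edge reproduces the value quoted above:

```python
# The two-thirds rule of thumb for the effective wavelength: for Fe,
# lambda_edge = 0.174 nm should give lambda_e of about 0.116 nm.

def effective_wavelength(lambda_edge_nm):
    return 2.0 / 3.0 * lambda_edge_nm

print(round(effective_wavelength(0.174), 3))  # 0.116
```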
Another interesting case is the analysis of a heavy element i in a light matrix. In
the summation in the denominator of Eq. (17), the terms Wi μi(λ0) and Wi μi(λi) are
then the most important. The other terms can, in a first approximation, be neglected if
Wi is not small. However, that means that the terms Wi in the numerator and the
denominator cancel and the measured intensity becomes independent of Wi; thus, the
analysis becomes impossible in this extreme case. A solution to this problem is found
by making the influence of the terms Wi μi(λ) in the denominator less dominating by
adding a large term Wa μa(λ). This can be done by making Wa large (e.g., diluting) or
μa(λ) large, by adding a heavy absorber. Equation (17) enables one to calculate before-
hand how large this term should be to eliminate the influence of fluctuations in the
concentrations of the other elements.

In the derivation of the fluorescence of the analyte i in the preceding paragraphs, the
following simplifications were made:
1. First, it was assumed that the primary rays follow a linear path to the layer dt at
depth t. However, the primary rays may also be scattered. In general, the loss in
intensity of the primary beam of photons due to scattering may be neglected.
These scattering effects become more important when the primary x-rays are
more energetic and the average atomic number of the matrix decreases. This
scatter may give a higher background in the secondary spectrum, thus leading to
a poorer precision of the analysis. On the other hand, the excitation efficiency
may be enhanced, as the primary rays dwell longer in the active layers, thus
having a higher probability of encountering atoms of element i. This effect may
overrule the increase in intensity of the background radiation. A case in point is
the determination of Sn in oils, which gives better results using the SnK lines at a
high x-ray tube voltage than using the SnL lines at moderate voltages.
Incidentally, this scattering of the primary radiation makes it possible to check
the voltage over the x-ray tube. According to Bragg's law, the intensity of the
primary spectrum is zero at an angle θ0, given by

    θ0 = sin⁻¹(n λmin / 2dcrystal)                                       (19)

where 2dcrystal is the 2d spacing of the crystal used and n is an integer number.
λmin (in nm) is given by

    λmin = 1.24 / V                                                      (20)
where V is the voltage on the x-ray tube, expressed in kilovolts. In practice, a
lower value for V will be found when Compton scattering dominates over
Rayleigh scattering, as the λmin found is then too long by the Compton shift,
which is about 0.0024 nm for most spectrometers; the actual value depends
on the incidence and exit angles (see also Sec. V.C.1).
2. The integral in Eq. (9) was taken from zero to infinity; further, it was assumed
that the specimen is completely homogeneous. This, of course, is never realized
in practice, as we are dealing with discrete atoms in chemical compounds. In
powders, the different compounds may have a tendency to cluster. The particles
will, in general, have different sizes and shapes. Putting the sample into solution,
either aqueous or solid (a melt), may overcome this problem.
It was stated earlier that infinite thickness may vary from 20 μm to a few cen-
timeters. However, the most effective layers are much thinner. Thus, the number of dis-
crete particles actually contributing to the fluorescent radiation may be rather small.
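The tube-voltage check described under point 1 can be sketched numerically. The LiF(200) crystal spacing (2d = 0.4027 nm) is a standard tabulated value; the 60 kV setting is an assumed example:

```python
import math

# Sketch of Eqs. (19)-(20): the short-wavelength limit of the tube spectrum
# (Duane-Hunt limit) and the minimum Bragg angle at which the scattered
# primary spectrum vanishes.

def lambda_min_nm(kV):
    return 1.24 / kV                                  # Eq. (20)

def theta0_deg(kV, two_d_nm, n=1):
    return math.degrees(math.asin(n * lambda_min_nm(kV) / two_d_nm))  # Eq. (19)

# LiF(200) has 2d = 0.4027 nm; at 60 kV the spectrum starts at ~0.0207 nm
print(round(lambda_min_nm(60.0), 4))                  # 0.0207
print(round(theta0_deg(60.0, 0.4027), 2))             # about 2.9 degrees
```

Observing the spectral cutoff at a noticeably lower angle than predicted would point to a tube voltage, or a Compton-shift correction, that needs revisiting.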

III. FACTORS INFLUENCING THE ACCURACY OF THE INTENSITY MEASUREMENT

A. Introduction
The total uncertainty of the analysis consists of many errors whose sources may be the
following:

The measurement of the intensity
The reproducibility of the specimen preparation
The conversion of intensity into concentration
The uncertainty of a single determination may be found from n determinations of the same
analysis, giving a mean result xmean of all individual results xi. If n is sufficiently large,
the standard deviation s may be found from

    s = √[ Σ_{i=1}^{n} (xi − xmean)² / (n − 1) ]                          (21)
In this total standard deviation s, random and systematic uncertainties are combined.
Random uncertainties give an indication of the precision of an analysis (the scatter of
results around the mean value), whereas systematic errors are the reason for deviations of
the mean value from the true value. An analysis may thus be precise, but not very
accurate, if systematic errors are present, whereas accurate values could be found from the
mean of widely scattered measurements if only large random uncertainties were present.
The total error of a measurement is composed of all the separate errors. If only random
errors s1, s2, and so forth are considered, the resulting standard deviation is given by

    s² = s1² + s2² + s3² + … + sn²                                       (22)

where s1, s2, and so on are the errors associated with, for example, intensity measurements,
specimen preparation, instrumental settings, and so on. In practice, it is often found that s
is dependent on the concentration of the analyte Wi (Johnson, 1967; Hughes and Hurley,
1987):

    s = K √(Wi + Wb)                                                     (23)

where Wb is a small concentration offset (typically 0.001 in weight fraction). Thus, K,
rather than s, becomes an indication of the accuracy of the determination.
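Equation (22) states that independent random errors combine in quadrature. A minimal sketch, with arbitrary illustrative values:

```python
import math

# Sketch of Eq. (22): independent random errors add in quadrature,
# s^2 = s1^2 + s2^2 + ... + sn^2.

def combined_sigma(*sigmas):
    return math.sqrt(sum(s * s for s in sigmas))

# e.g., counting error and specimen-preparation error (arbitrary units)
print(combined_sigma(0.3, 0.4))  # about 0.5 (the 3-4-5 case)
```

A practical consequence: the largest single error dominates the total, so effort spent reducing the smaller contributions yields little improvement.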

B. Random Errors
1. Counting Statistics
If an x-ray measurement consisting of the determination of a number of counts N is re-
peated n times, the results N1, N2, N3, . . . , Nn would spread about the true value N0. If n is
large, the distribution of the measurements would follow a Gaussian distribution,

    W(N) = [1/√(2πN0)] exp[−(N − N0)²/(2N0)]                             (24)

provided N is also large. The standard deviation s of the distribution is equal to √Nmean,
again if n and N are large, where Nmean is the mean of the n determinations. From the
properties of the Gaussian distribution, the following hold:

68.3% of all values N will be between N0 − s and N0 + s.
95.4% of all values N will be between N0 − 2s and N0 + 2s.
99.7% of all values N will be between N0 − 3s and N0 + 3s.
Similarly, there is a certain probability that the true result N0 will lie between N − √N
and N + √N, assuming the same distribution for N and N0. Measurement results are
commonly expressed as a count rate (counts per unit time) instead of a number of
counts collected in the counting interval. This allows an easier comparison of
measurements made with different counting times, but the measuring time needs to be
specified in order to be able to assess the counting statistical error.
Copyright 2002 Marcel Dekker, Inc.
The determined concentration is dependent on the net count rate, which is the peak
count rate $R_p$ minus the background count rate $R_b$. The total measuring time T equals
$t_p + t_b$, where $t_p$ and $t_b$ are the measurement times for peak and background, respectively.
In modern equipment, there is no significant statistical error in the measurement of t. We
can thus assume that R follows the same Gaussian distribution as N, with the same relative
standard deviation $\varepsilon_N$. $\varepsilon_N$ is defined as

$\varepsilon_N = \dfrac{s_N}{N}$  (25)

Hence,

$\varepsilon_N = \dfrac{\sqrt{N}}{N} = \dfrac{1}{\sqrt{N}} = \dfrac{1}{\sqrt{Rt}} = \varepsilon_R$  (26)

and

$s_R = \varepsilon_R R = \sqrt{\dfrac{R}{t}}$  (27)
It is obvious that the relative counting error decreases as t increases.
When a net count rate has to be determined, the peak count rate $R_p$ and the background
count rate $R_b$ have to be measured; there are thus two independent variables. The standard
deviation of the net intensity, $s_d$, is given by

$s_d = \sqrt{s_p^2 + s_b^2} = \sqrt{\dfrac{R_p}{t_p} + \dfrac{R_b}{t_b}}$  (28)

and the relative standard deviation $\varepsilon_d$ by

$\varepsilon_d = \dfrac{\sqrt{R_p/t_p + R_b/t_b}}{R_p - R_b}$  (29)
When using a sequential wavelength-dispersive x-ray fluorescence spectrometer, and when
the count rates are rather low and time is limited, it is of interest to divide the total
counting time available over $t_p$ and $t_b$ in the best possible way. In principle, there are three
methods to split up the total counting time T:

1. Fixed time. Peak and background are measured for the same time:
$t_p = t_b = T/2$. The resulting standard deviation in the difference between the
peak and the background count rate is then given by

$s_d = \sqrt{\dfrac{2}{T}\left(R_p + R_b\right)}$  (30)

2. Fixed count. The same number of counts, N, is collected on the peak and on the
background: $N_p = N_b$, or

$\dfrac{t_p}{t_b} = \dfrac{R_b}{R_p}$  (31)

The resulting standard deviation for the difference is given by

$s_d = \sqrt{\dfrac{1}{T}\left(R_p + R_b\right)\left(\dfrac{R_p}{R_b} + \dfrac{R_b}{R_p}\right)}$  (32)



3. Fixed time optimal. The optimum division of the total measurement time T over
$t_p$ and $t_b$ can be found by differentiating Eq. (29) with respect to $t_p$, where
$t_b = T - t_p$. It is found that for optimal results, the ratio of the measuring times is

$\dfrac{t_p}{t_b} = \sqrt{\dfrac{R_p}{R_b}}$  (33)

The resulting standard deviation is given by

$s_d = \sqrt{\dfrac{1}{T}}\left(\sqrt{R_p} + \sqrt{R_b}\right)$  (34)

and the relative standard deviation by

$\varepsilon_d = \dfrac{1}{\sqrt{T}}\,\dfrac{1}{\sqrt{R_p} - \sqrt{R_b}}$  (35)

It can easily be demonstrated that

$s_{\mathrm{FTO}} \leq s_{\mathrm{FT}} \leq s_{\mathrm{FC}}$  (36)

where $s_{\mathrm{FTO}}$ refers to the method of fixed time optimized, $s_{\mathrm{FT}}$ is for fixed time, and $s_{\mathrm{FC}}$ is
for fixed counts. When $R_p$ is very large compared with $R_b$, $s_{\mathrm{FTO}}$ is close to $s_{\mathrm{FT}}$. Thus, the
fixed time method is often used in practice, because $R_p$ and $R_b$ are not known beforehand.
If there is a great difference between peak and background count rate, then usually all the
available time is spent counting the peak. Thus, the background is not measured. A
constant value for the background can be assumed, or it can be ignored altogether.
Compared with the method of fixed time, the following observation can be made: If $R_b$ is
not measured and a fixed value is assumed for it, the standard deviation in the net count
rate (calculated as the difference between $R_p$ and the assumed $R_b$) is given by

$s_d = \sqrt{s_b^2 + \dfrac{R_p}{T}}$  (37)

This approximation is allowed if this value is smaller than $\sqrt{2/T}\,\sqrt{R_p + R_b}$. This is the
case if $s_b \leq \sqrt{R_b/T}$. This applies if the background is more or less constant between
specimens, so that a fixed value for the background can be deducted. If the background is
ignored altogether, $s_b = R_b$. This is only allowed when $R_b$ is much smaller than $\sqrt{R_p/T}$.
It follows that when one aims for optimum instrumental conditions, the expression
$\sqrt{R_p} - \sqrt{R_b}$ is a good quality function. This parameter is often used as a figure of merit
(FOM). Obviously, the highest value of $\sqrt{R_p} - \sqrt{R_b}$ gives the best result. When it can be
assumed that $R_p$ approximates $R_b$, as is the case for low intensities and high background,
optimizing this term equals optimizing

$\dfrac{M}{\sqrt{R_b}}$  (38)

where M (the slope of the calibration line in counts per second per percent) is proportional
to $R_p - R_b$.
If the ratio of two count rates $R_1$ and $R_2$ has to be determined, the methods of fixed
time and fixed count give the same result, whereas the method of optimum division, where

$\dfrac{t_1}{t_2} = \sqrt{\dfrac{R_2}{R_1}}$  (39)

always gives the best result.
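The three time-splitting strategies can be compared numerically. The following Python sketch (with purely illustrative count rates) evaluates Eqs. (30), (32), and (34) and confirms the ordering of Eq. (36):

```python
import math

def sd_fixed_time(rp, rb, t_total):
    """Eq. (30): tp = tb = T/2."""
    return math.sqrt(2.0 * (rp + rb) / t_total)

def sd_fixed_count(rp, rb, t_total):
    """Eq. (32): tp/tb = Rb/Rp, i.e. Np = Nb."""
    return math.sqrt((rp + rb) / t_total * (rp / rb + rb / rp))

def sd_optimal(rp, rb, t_total):
    """Eq. (34): tp/tb = sqrt(Rp/Rb)."""
    return (math.sqrt(rp) + math.sqrt(rb)) / math.sqrt(t_total)

rp, rb, T = 1000.0, 40.0, 100.0   # peak cps, background cps, total seconds
s_fto = sd_optimal(rp, rb, T)     # ~3.79 cps
s_ft = sd_fixed_time(rp, rb, T)   # ~4.56 cps
s_fc = sd_fixed_count(rp, rb, T)  # ~16.1 cps
assert s_fto <= s_ft <= s_fc      # Eq. (36)
```

Note that fixed count performs poorly here because Eq. (31) assigns most of the time to the weak background.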
The total counting uncertainty is the combination of instrumental error and
counting statistics. When the count rate is very high, the counting uncertainty is small and
it may be worthwhile to apply a ratio method to reduce possible instrumental
uncertainties. However, if the count rate is low, it is better to spend all the available time
in analyzing the specimen to reduce the counting error.

2. Instrumental Errors
If the instrumental and counting uncertainties are random and independent variables, then

$\varepsilon_{\mathrm{tot}} = \sqrt{\varepsilon_{\mathrm{instr}}^2 + \varepsilon_{\mathrm{count}}^2}$  (40)

$\varepsilon_{\mathrm{instr}} = \sqrt{\varepsilon_{\mathrm{tot}}^2 - \varepsilon_{\mathrm{count}}^2}$  (41)

Although the counting error is influenced by the instrumental error, Eq. (41) is still a good
approximation. $s_{\mathrm{tot}}$ can be found from a series of repeated results of one measurement;
$s_{\mathrm{count}}$ is known from counting statistics. To check the instrumental stability, all the
functions should be measured separately. A radioactive source (55Fe, for example) can be
used to check the detector and electronic circuitry. The x-ray tube can be checked by
repeating the measurements with specimen and goniometer in a fixed position, eliminating
errors stemming from the mechanics of the spectrometer.
In another series of experiments, recycling between different angles checks the
reproducibility of the goniometer, whereas repositioning the specimen between
measurements checks the specimen holder, the reproducibility of the specimen loading mechanics,
and so forth. A comprehensive series of tests for wavelength-dispersive x-ray fluorescence
spectrometers is described in the Australian Standard 2563-1982 (1982). Sometimes the
error $s_{\mathrm{tot}}$ found is smaller than expected, or even smaller than $s_{\mathrm{count}}$. This may indicate that
an unexpected systematic error is involved, or there may be an uncorrected dead time in the
equipment (i.e., the true counting rate is higher than measured, which means that the
relative error is smaller).

3. Detection Limit
A characteristic line intensity decreases with decreasing concentration of the analyte and
finally disappears in the background noise. The true background intensity may be
constant, but the results of the measurements fluctuate around a mean value $R_{b,\mathrm{mean}}$. To be
significantly different from the background, a signal $R_p$ must not only be larger than
$R_{b,\mathrm{mean}}$, but must also be distinguished from the spread in $R_b$. In other words, if we
measure a signal $R_p$ larger than $R_b$ and we assume the analyte is present, what is the
probability that our assumption is correct? If the results of the measurements are random
and follow a Gaussian distribution, then this probability is determined by $s_{R_b}$. If the
measurement $R_p$ is higher than $R_b + 2s_{R_b}$, then the probability that our assumption is
correct is approximately 95%; if a higher certainty is required (e.g., 99.7%), then $R_p$
should be larger than $R_b + 3s_{R_b}$. Thus, the net intensity is $3s_{R_b}$ and the detection limit,
DL, would be

$\mathrm{DL} = \dfrac{3s_{R_b}}{M}$  (42)

where M is the sensitivity in counts per second per percent. So, the detection limit in the
above equation is the concentration corresponding to a net peak intensity of $3s_{R_b}$.
However, in x-ray spectrometry, the background signal is specimen dependent and cannot
be measured independently, as in radioactivity measurements. Hence, $R_b$ has to be
measured in an off-peak location in the spectrum. The result $R_b$ found in measuring time
t s is assumed to be $R_{b,\mathrm{mean}}$, and $s_{R_b}$ is assumed to be $\sqrt{R_b/t}$. Thus, two measurements
have to be made: $R_p$ and $R_b$, each in time t s. The detection limit thus becomes

$\mathrm{DL} = \dfrac{3\sqrt{2}}{M}\sqrt{\dfrac{R_b}{T}}$  (43)

where $T = 2t$. If we are satisfied with a 95% probability that our assumption is correct,
then

$\mathrm{DL} = \dfrac{2\sqrt{2}}{M}\sqrt{\dfrac{R_b}{T}}$  (44)

which is roughly equal to

$\mathrm{DL} \approx \dfrac{3}{M}\sqrt{\dfrac{R_b}{T}}$  (45)
It is obvious that the detection limit decreases as the counting time increases. However, the
total error in $R_b$ contains the instrumental error as well. Thus, there is no point in
increasing the counting time when the instrumental error dominates.
Ingham and Vrebos (1994) have shown that the detection limit can be improved by
carefully selecting a primary filter. If the application of such a primary beam filter reduces
the intensity of the (scattered) continuum from the tube more than it affects the
sensitivity, the detection limit is improved. The loss in sensitivity M needs to be more
than compensated for by the reduction in background intensity, as, from Eq. (45), the
detection limit is proportional to the square root of $R_b$ and inversely proportional to the
sensitivity M.

4. Variation in X-ray Spectrum


The fluorescent intensity of the analyte $I_i$ is, in first approximation, dependent on the
primary spectrum according to

$I_i = K\,i\,(V_0 - V_c)^p$  (46)

where K is a constant, i is the current of the x-ray tube, $V_0$ is the working voltage, $V_c$ the
excitation voltage, and p varies between 1 and 2, depending on the ratio of excitation by
characteristic tube lines and white continuum. In modern instruments, the tube voltage is
not dependent on the mains cycle; the tubes run on a constant potential, which may still
fluctuate. If the working voltage $V_0$, or the region or line of highest excitation probability,
is rather close to $V_c$, then small fluctuations in $V_0$ will introduce a considerable error in $I_i$.
For instance, if $V_0 = 1.5V_c$, then a 1% error in $V_0$ gives an error in $I_i$ of 6% when $p = 2$.
It is therefore better to run the tube at three to five times the excitation voltage of
the analyte. Too high a voltage, however, might introduce a disproportionately high
background.
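The sensitivity of Eq. (46) to tube-voltage fluctuations can be checked directly. This sketch, with arbitrary values for K, i, and $V_c$, reproduces the roughly 6% figure quoted above:

```python
def intensity(k, i_tube, v0, vc, p=2):
    """Eq. (46): Ii = K * i * (V0 - Vc)^p."""
    return k * i_tube * (v0 - vc) ** p

vc = 20.0            # excitation voltage in kV (illustrative)
v0 = 1.5 * vc        # working voltage V0 = 1.5 Vc
i_nominal = intensity(1.0, 1.0, v0, vc)
i_shifted = intensity(1.0, 1.0, 1.01 * v0, vc)   # 1% error in V0
rel_error = i_shifted / i_nominal - 1.0          # ~0.06, i.e. about 6%
```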

5. Other Instrument Errors


Other possible random instrumental errors include positioning the specimen and setting
the goniometer in wavelength-dispersive x-ray spectrometers. These errors have to be
checked by repeated measurements of one specimen in a systematic way:

Repeated counting with stationary specimen and goniometer
Repeated counting with stationary specimen and repositioning goniometer
Repeated counting with stationary goniometer and repositioning specimen
Repeated counting with stationary goniometer and reloading specimen holder

Some diffracting crystals have a rather high coefficient of thermal expansion;
therefore, their d value may fluctuate with fluctuations in temperature. This results in a
setting of the goniometer slightly off-peak, which introduces a change in measured
intensity. Therefore, most modern equipment operates at a stabilized temperature.

6. Particle Statistics
Only a limited volume of the specimen can actually contribute to the fluorescent radiation.
As long as this active volume is the same in standards and actual specimens and the atomic
distribution is completely homogeneous, this poses no problem. However, the atoms are
bound into chemical compounds, forming finite particles with different chemical
compositions. The analyte may only occur in particles with a certain chemical composition and
not in other particles. Then, only these specific particles can contribute to the fluorescent
radiation of the analyte i. Therefore, the count rate $R_i$ measured depends on the number of
those particles present in the active volume, where, evidently, the first layers contribute
most of the fluorescent radiation.
Table 2 gives an indication of the penetration depth of radiation of various
wavelengths into matrices with varying absorption power. It is evident that for most solid
specimens, the fluorescent radiation originates within 20 µm or less from the surface. To
get an idea of how many particles can actually contribute to the fluorescent intensity of
analyte i, let us assume that the irradiated area of 10 cm² is covered in a random fashion
with cubic particles of 100 µm dimension. Assuming a filling factor of 0.8 and assuming
that the analyte i is only present in 10% of the particles, then
$10 \times 10^8 \times 10^{-4} \times 10^{-1} \times 0.8 = 8000$ particles could actually be contributing.
Assuming a Gaussian distribution, this number would have a standard deviation of
$\sqrt{8000} \approx 90$ particles, or a relative standard deviation of approximately 1.1%. If the
concentration of analyte is only 1%, then this relative standard deviation would be roughly
3.3%. In practice, these errors might even be larger, as the irradiation of the specimen is
not homogeneous, because the primary spectrum originates in a rather small anode, passes
through a large window, and is thus conically shaped. Spinning the specimen in its own
plane during the analysis will reduce this error. Furthermore, the first layers are the most
effective, having

Table 2  Infinite Thickness (in µm) for Certain Analytical Lines as a Function of the Matrix

Analytical line   Fe base   Mg base   H2O solution   Borate   Borate + La2O3 10%

SnKa                300      10,000      100,000      70,000        10,000
MoKa                100       3,700       30,000      30,000         2,600
NiKa                 12         340        2,400       2,000           300
CrKa                 33         120          900         800           250
AlKa                  1.5         4           10          15             5
NaKa                  0.7        20            9           6             5
CKa                   0.3         0.3          4           1             0.4

Note: Both the incidence and exit angles are 45° and excitation is by a Rh tube at 60 kV. The influence of
element-specific absorption can be seen from, for example, the values for NiKa and CrKa in the Fe-base matrix.



only a small number of particles containing the analyte, with correspondingly larger
relative errors.
In extreme cases, it is evident that the effective layer is very thin, less than 1 µm. Care
should thus be taken that the specimen surface is as smooth as possible, as surface
irregularities (e.g., grooves, ridges) will introduce a considerable error. Spinning the
specimen will, again, reduce this error.
It is therefore vital that the specimen be completely homogeneous. If this is not
possible and powders have to be analyzed, care should be taken that the particles are very
small, less than a few micrometers in diameter. Specimen preparation is discussed in full
detail in Chapter 14.
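The worked particle-statistics example above can be reproduced in a few lines; all values (irradiated area, particle size, filling factor, analyte fraction) are the illustrative ones from the text:

```python
import math

area = 10 * 1e8            # irradiated area: 10 cm^2 expressed in um^2
particle_face = 100 ** 2   # face of a 100 um cubic particle, in um^2
filling = 0.8              # filling factor
fraction = 0.1             # analyte present in 10% of the particles

n_particles = area / particle_face * filling * fraction   # 8000
rel_sd = math.sqrt(n_particles) / n_particles             # ~0.011, i.e. ~1.1%
```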

C. Systematic Errors
1. Dead Time
After an x-ray photon is detected in the counter and accompanying electronics, it takes a
certain time before the counting circuit is ready to accept the next photon. Any photon
entering the counter within this period, called the dead time of the counter circuit, is
simply not registered and is thus lost. This dead time is of the order of a few microseconds.
The counting losses are thus dependent on the actual count rate. The measured count rate
$R_m$ is always lower than the true count rate $R_T$. Their relation can be approximated by the
expression

$R_T = \dfrac{R_m}{1 - t_d R_m}$  (47)

where $t_d$ is the dead time. For instance, if $t_d = 1\ \mu\mathrm{s}$ and $R_m = 10^5$ counts per second, the
dead-time loss is approximately 10%. With modern equipment, very high count rates can
be handled to reach sufficient precision in a short time. It is therefore necessary to reduce
or correct for these losses. In most wavelength-dispersive instruments, an automatic
dead-time correction circuit is included. Energy-dispersive instruments, on the other hand,
tend to collect counts for the specified live time; the measurement then takes longer,
because the total time required consists of the specified measuring time (live time) plus the
dead time.
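Equation (47) and the 10% example can be verified with a one-line correction function (a sketch only; real instruments typically apply this correction in hardware or firmware):

```python
def true_count_rate(r_measured, t_dead):
    """Dead-time correction of Eq. (47): RT = Rm / (1 - td * Rm)."""
    return r_measured / (1.0 - t_dead * r_measured)

r_true = true_count_rate(1.0e5, 1.0e-6)   # ~1.11e5 cps
loss = 1.0 - 1.0e5 / r_true               # 0.10, i.e. a 10% dead-time loss
```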

2. Matrix Effects
The fluorescent intensity of the analyte i is, as discussed earlier, not only dependent on its
concentration but can also be strongly dependent on the composition of the specimen
itself. The primary rays will be absorbed and scattered, and secondary fluorescence may
occur. All of these effects depend on the chemical composition of the specimen. The
importance of these effects depends on the concentration of the matrix elements and
their influence (e.g., their absorption of primary and secondary x-rays). These matrix
effects may introduce large systematic errors when they are not properly accounted for, as
discussed in Sec. V.

D. Choice of Optimal Conditions


1. Selecting the Analytical Line
Some considerations in choosing the analytical line are the following:

1. High sensitivity; thus, preferably the strongest line in the emission spectrum is
used, which is commonly the Ka line
2. Low intensity of background radiation
3. Constant angle of Bragg diffraction (wavelength-dispersive instruments)
4. No coincidence (line overlap) with lines of other elements

Each of these items will be discussed in more detail below.
1. For the light- and medium-Z elements, the Ka line is by far the strongest line in
the spectrum and is therefore often chosen as the analytical line. For the
elements with K-edge energies exceeding 40 keV, the L lines are preferred
because the K lines, in general, cannot be used, the maximum voltage for most
spectrometers being limited to 100 kV or less. Typically, a voltage of three times
$V_c$ or more is needed to get a high characteristic intensity. Furthermore, the
resulting K lines are very energetic and are detected with only mediocre
efficiency by Si(Li) or NaI scintillation detectors. Ge detectors have a higher
efficiency for that energy range and are therefore the preferred detectors.
2. Another reason the L lines are preferred for the heavier elements is that, in direct
tube-excited XRF, the background due to scattering of the primary x-rays is
much lower in the L region of the spectrum.
3. The wavelength of some analytical lines may shift slightly with the valence state
of the elements, especially for the light elements and the L lines of the transition
metals (Wood and Urch, 1978); thus, the standard used in setting the goniometer
to the analytical line should correspond to the specimen in this respect. Another
reason for an apparent shift in angle may be the change in d value of the
analyzing crystal with temperature.
4. The analytical line should ideally be completely free of any disturbing lines.
However, there are many sources of disturbing influences; some of these are described
next.
The following sections deal with line overlap in the case of wavelength-dispersive
spectrometers. For energy-dispersive spectrometers, please refer to Chapter 4.

2. Spectral Overlap
Two or more characteristic lines may not be completely separated from the analytical line.
This separation may be improved by using a crystal with better dispersion (e.g., a lower d
value). However, the choice must often be made between high intensity and high
dispersion. If the disturbing line is due to a high-order crystal reflection, its influence may be
strongly reduced by the proper setting of the pulse-height selector. However, in some
cases, the escape peak of the interfering line may be within the pulse-height selector
window. For instance, the third-order reflection, using a pentaerythritol (PE) crystal, of
the characteristic tube lines of a Sc anode, scattered by the specimen, will slightly interfere
with the analysis for Al in a light matrix, as the energy of the ScKa escape peak in an
Ar-filled gas detector is very close to the energy of the AlKa line.
Often, the overlap is due to a diagram line of an element of which another diagram
line is free of overlap. As, in general, two diagram lines of one element have a constant
intensity ratio, the measured intensity of the nonoverlapped line of the disturbing element,
multiplied by a constant factor (experimentally determined), may be subtracted from the
measured intensity of the analytical line to give the characteristic intensity. Absorption
effects can, however, strongly influence the ratio. This is most clearly the case when there is
an absorption edge of a major element between the two diagram lines considered.
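The constant-ratio overlap correction just described can be sketched as follows; the factor 0.05 and the count rates are purely illustrative:

```python
def overlap_corrected(i_analyte_line, i_interferer_free_line, factor):
    """Subtract the estimated overlap contribution.

    factor is the experimentally determined intensity ratio between the
    overlapping line and an overlap-free line of the disturbing element.
    """
    return i_analyte_line - factor * i_interferer_free_line

# 1250 cps measured on the analyte line, 800 cps on an overlap-free line
# of the interfering element, experimentally determined ratio 0.05:
i_char = overlap_corrected(1250.0, 800.0, 0.05)   # 1210 cps
```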
3. Primary Radiation Scattered by the Specimen
Photons of all wavelengths present in the white spectrum of the incident beam are
scattered by the specimen, including the characteristic lines of the tube anode, giving rise
to a continuous background. If, however, the specimen consists of rather coarse grains, it
may happen that a crystallite is in a favorable position for Bragg diffraction of a
wavelength of the continuum; thus, a sharp peak will be found in the spectral analysis. The
influence of primary tube lines, coherently or incoherently scattered, may be eliminated by
the proper choice of the anode.

4. Spurious Reflections by the Analyzing Crystal


LiF(110) is a common analyzing crystal. The second order of reflection is used because the
first order is crystallographically forbidden; therefore, the actual planes used for the
diffraction are the 220 planes. Hence, some manufacturers refer to it as the LiF(220)
crystal; the same applies to the LiF(100) and LiF(200) denominations. However, at very
high intensities, a first-order reflection will still be found, due to asymmetry of
the electronic cloud. Similarly, a second order may be observed using a Ge crystal.

5. Satellite Lines
The common wavelength tables give only the characteristic K, L, and M lines of most
elements. The M lines of heavy elements may interfere with the K lines of light elements.
Nondiagram (satellite) lines may also occur, often giving rise to an unexpected background
level. One of the tables that includes such lines is provided by NIH (Bethesda, MD)
(Garbauskas and Goehner, 1983).* The lines that are easiest to observe are some of the
satellite lines of Al, Si, and P with a wavelength-dispersive spectrometer. In Figure 3,
a spectrum of aluminum is shown, on which some of these lines have been annotated.

IV. CALIBRATION AND STANDARD SPECIMENS


A. Introduction
As shown earlier, standard specimens must cover the concentration range of interest, be
stable over time, and have a certified composition. They are, however, not the only
specimens required to set up a calibration for routine use and to maintain it over extended
periods of time. Quality control specimens (also called quality assurance specimens) are
used to assess the quality of the analysis obtained over time, whereas drift-correction
monitors are used to correct for long-term drift of the equipment. Finally, recalibration
standards can be used if the calibration graph must be reconstructed.

B. Quality Control Specimens


The use of at least one sample with known composition to assess the accuracy at the time
of the calibration is highly recommended. The sample(s) used for this purpose should be
typical of the unknowns and should not be used for calibration, to avoid biases and overly
optimistic estimates of accuracy. Furthermore, it is sound practice to select at least one
specimen with a composition similar to the unknowns as a quality control specimen to
verify the repeatability over extended periods of time. Should the results on the quality

*Database made available through C. Fiori, National Institutes of Health, Bethesda, Maryland.



Figure 3  Spectrum of an aluminum-containing specimen. The peaks are the following emission
lines: (1) AlKb1,Kb3 doublet; (2) AlSKb'; (3) AlSKa7; (4) AlSKa5; (5) AlSKa4; (6) AlSKa3;
(7) AlKa1,Ka2 doublet. The satellite lines are explicitly labeled with S. Note the logarithmic scale
on the intensity axis. Conditions: wavelength-dispersive spectrometer with PE crystal, Rh tube.

control specimen fall outside a predetermined range, then adequate measures must be
taken. The first step is usually to perform a correction for drift. DeGroot (1990) has
described how the use of statistical process control (SPC) can be beneficial in this respect.
The SPC charts that can be maintained in this way allow one to check the performance of
the spectrometer system.

C. Drift-Correction Monitors
Correction for drift can be made by measuring selected specimens for each of the analytes
and calculating the ratio between the observed intensities and those obtained when the
calibration was performed. The ratios can then be applied as a correction to the slope
of the calibration graphs or, as is done more often, the measured intensities of the
unknowns are corrected for drift prior to the conversion to concentration. If drift
correction is performed regularly (e.g., once a day), the drift between subsequent
measurements is very small; thus, measurements with high precision are required,
otherwise the counting statistical error of the measurement would become dominant.
Drift correction should not even be applied unless the correction is significant. Because
the precision of an intensity measurement is determined by the number of counts
collected, drift-correction monitors are, ideally, specimens on which high count rates can
be obtained. Drift-correction monitors do not have to be specimens with a composition
similar to the unknowns.
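A minimal sketch of the drift-correction step, with hypothetical monitor count rates: the measured intensities of the unknowns are rescaled by the ratio of the monitor intensity at calibration time to its current value:

```python
def drift_factor(i_monitor_now, i_monitor_calibration):
    """Ratio used to correct measured intensities for long-term drift."""
    return i_monitor_calibration / i_monitor_now

# Monitor gave 50,000 cps at calibration, 49,000 cps today (2% drift down):
f = drift_factor(49000.0, 50000.0)
i_corrected = 980.0 * f   # a 980 cps measurement is restored to 1000 cps
```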
As different components of the spectrometer (such as the x-ray tube) age, not only will
the sensitivity be affected (this is generally a downward trend), but in some cases the
background will vary also. To correct for this change in background intensity,
measurements of the background must be performed. This poses a problem, inasmuch as the count
rate on the background is usually low; thus, measurements are either not very precise or
they require a long measuring time. However, the considerations on the counting statistical
error made earlier (Sec. III.B.1) offer some suggestions. First, the intensity of the background is
intensity are also negligible. When the background matters, it should already be measured
anyway and then drift correction on the background intensity is not required, as the net
count rate is corrected for drift. The only case that is not covered here is the contribution of
spectral contamination from the x-ray tube (e.g., Cu, W, Ag, Fe, etc.). In that case, the
background, including the contamination, must be measured on peak. This implies that
measurements must be done for specimens with zero analyte concentration. Again, two
cases can be distinguished: If the background including the contamination is not important
compared to the count rate observed, then no corrections are required. On the other hand,
if one of the analytes is present at low concentration, then the contribution to the back-
ground due to the contaminant must be checked periodically and taken into account.
After the drift correction is performed, a quality control specimen should be mea-
sured to verify the procedure.

D. Recalibration Standards
Sometimes (e.g., after major maintenance on a spectrometer), drift correction does not
bring the quality control specimens back in line with expectations. In those cases, the
calibration curve must be reconstructed. This can be done by measuring all the standard
specimens again and repeating the complete calibration procedure. Because calibrations
often use many standards and validating each calibration is required, this can be a
time-consuming process, even if the validation is limited to a quick visual inspection of the
calibration graphs. In such cases, a recalibration can be performed based on only a few
standards. The idea of recalibration is to reconstruct the calibration graph without having
to measure all the standards again. This is done by selecting a few standard specimens for
each analyte and measuring these (the top and bottom points in Fig. 4). Subsequently,
when determining the parameters (such as slope and intercept) of the regression, the
certified values of these specimens are no longer used for the concentrations; instead, the
values found on the calibration line at the time of the original calibration (the "x-ray
values") are used. These x-ray values were found based on all the standard specimens
used, and the idea is to fix the calibration line again through these points. As a result,
the statistical data are now skewed, but the values for the slope and the intercept are very
close to the original ones; the small differences between old and new values are due to the
counting statistical errors in the measurements, and these are also present when unknowns
are measured. The specimens used for recalibration have also been used for the
calibration. The only requirement is that the recalibration specimens have count rates that are
different enough so that the determination of the slope is sufficiently accurate. For each of
the calibration lines, the number of selected specimens must be at least the same as the
number of parameters to be determined. If the slope and intercept are determined, at least
two recalibration standards are required. However, if in this case three or four standards
are used, it is possible to detect gross counting artifacts (e.g., caused by mislabeled
standards, incorrect loading, etc.). The root mean square error or the correlation coefficient on
the calibration line (which can only be calculated if more specimens are used than
parameters determined) has no relationship with the accuracy of the analysis. In fact, a
near-perfect correlation should be obtained. As in the case of drift correction, it is recommended to
Figure 4  The original calibration line, based on seven data points (open squares), can be
reconstructed using only two data points (in this case, top and bottom), with concentrations
("x-ray values") modified to those obtained on the original calibration (filled squares).

measure, after recalibration, the quality control specimen(s), as gross errors might thus be
identified before unknown specimens are analyzed.
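The two-point recalibration of Figure 4 amounts to refitting the straight line through the x-ray values of the selected standards; a sketch, with all numbers hypothetical:

```python
def recalibrate(i_low, w_low, i_high, w_high):
    """Slope K and intercept B of W = B + K*I through two recalibration
    points, using their x-ray values (fitted, not certified, concentrations)."""
    k = (w_high - w_low) / (i_high - i_low)
    b = w_low - k * i_low
    return k, b

# x-ray values of the bottom and top standards, with freshly measured cps:
k, b = recalibrate(98.0, 1.1, 1015.0, 10.2)
w_unknown = b + k * 500.0   # convert a newly measured 500 cps to concentration
```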

E. Conclusion
Setting up a calibration to be used over extended periods of time requires a considerable
amount of work and preparation. It involves the selection and procurement not only of
standard specimens but also of drift-correction monitors and recalibration standards.
The specimen preparation method is also an essential part of the whole procedure.
The result, however, is the ability to produce quantitative results to a previously
assessed degree of accuracy and precision over extended periods of time with minimal
work, once the specimen preparation procedure is set up and the initial calibration is
performed.

V. CONVERTING INTENSITIES TO CONCENTRATION


A. Introduction
The simplest equation relating intensity to concentration is

$I_i = K_i' W_i$  (48)

where $K_i'$ is assumed to be a constant. The equation holds in general for cases where the
total effect of the matrix on the analyte i is constant (e.g., elements at minor and trace
concentration levels in low-alloy steels or thin-film specimens). The intensity $I_i$ in Eq. (48)
is a net intensity: the measured intensity corrected for background, line overlap, and so forth.
In practice, the measured intensity is often used directly, without the subtraction of
background, leading to a more general equation:

$I_i = B_i' + K_i' W_i$  (49)

where $B_i'$ is the measured intensity when $W_i = 0$. If there is no uncorrected line overlap,
$B_i'$ is the background. This equation can be rearranged to

$W_i = B_i + K_i I_i$  (50)

The constant $K_i$ is called the sensitivity and is expressed in counts per second per unit
concentration (e.g., percent, mg/L, etc.).
The most common method of determining the constants $B_i$ and $K_i$ (or $B_i'$ and $K_i'$)
is linear regression on a number of standard specimens. Linear regression can be done
by minimizing the sum of squared residuals of W or of I; see Figure 5. In theory, the
method of least squares assumes that the errors of the dependent variable [W in Eq.
(50)] are normally distributed. The two lines obtained (one by minimizing $\Delta W$, the other
by minimizing $\Delta I$) are not the same. Because, for the analysis, the intensity $I_i$ is
measured, it is recommended to minimize for $W_i$. Also, in general, the relative error of
the measured intensities is smaller than the relative error of the concentrations in the
standard specimens. This is especially true for the determination of trace elements. The
following formulas can be used for the determination of the values of the parameters
$K_i$ and $B_i$:

$W_i = B_i + K_i I_i$  (50)

$K_i = \dfrac{\sum_{j=1}^{n} W_{ij} I_{ij} - \left(\sum_{j=1}^{n} W_{ij}\right)\left(\sum_{j=1}^{n} I_{ij}\right)/n}{\sum_{j=1}^{n} I_{ij}^2 - \left(\sum_{j=1}^{n} I_{ij}\right)^2/n}$  (51)

Figure 5  Straight lines through the data points can be determined by minimizing either the sum of
the squares of the residuals $\Delta I$ or $\Delta W$.



and

Bi = (Σj Wij − Ki Σj Iij) / n  (52)
where the sums are over the standard specimens, j = 1, 2, …, n, with n the number of
standard specimens used for analyte i. Equally important, however, are the variances of
the parameters determined. Formulas to calculate the variances can be found in the lit-
erature [Draper and Smith (1966)], and many commonly used computer programs such as
spreadsheets and statistical packages include the relevant calculations as well. Some
important conclusions are as follows:
1. The concentrations of the standard specimens must cover the expected range of
concentrations.
2. The calculated concentration is more accurate at the center of the line than at the
extremities; the estimated variance for Wx increases with (Wx − Waverage).
3. Because a calibration line is derived using data from several standards, the
analysis of the unknown can sometimes be more accurate than the individual
standard specimens; this is due to the effect of averaging.
When the background is properly subtracted from the gross count rate, the background Bi
is equal to zero and Eq. (50) reduces to
Wi = Ki Ii  (53)
In this case, the value for the slope Ki is found from

Ki = (Σj Wij Iij) / (Σj Iij²)  (54)
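The calibration formulas can be sketched in a few lines of code. This is a minimal illustration of Eqs. (51), (52), and (54); the standards data are invented for the example.

```python
# Least-squares calibration constants from a set of standards.
def calibrate(W, I):
    """Slope Ki and intercept Bi of Wi = Bi + Ki*Ii (Eqs. 51 and 52)."""
    n = len(W)
    sW, sI = sum(W), sum(I)
    sWI = sum(w * i for w, i in zip(W, I))
    sII = sum(i * i for i in I)
    Ki = (sWI - sW * sI / n) / (sII - sI * sI / n)
    Bi = (sW - Ki * sI) / n
    return Ki, Bi

def calibrate_through_origin(W, I):
    """Slope only, for background-corrected net intensities (Eq. 54)."""
    return sum(w * i for w, i in zip(W, I)) / sum(i * i for i in I)

# Hypothetical standards: weight fractions and gross count rates (I = 20 + 1000*W).
W = [0.10, 0.20, 0.30, 0.40]
I = [120.0, 220.0, 320.0, 420.0]
Ki, Bi = calibrate(W, I)
# Net (background-corrected) rates for the same standards:
K0 = calibrate_through_origin(W, [100.0, 200.0, 300.0, 400.0])
```

For the linear data above, `calibrate` recovers Ki = 0.001 and Bi = −0.02, i.e., the inverse of the intensity-versus-concentration line, and the through-origin fit gives the same slope once the background of 20 counts per second is removed.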

B. Matrix Effect

Applying Eq. (50) or (53) requires that all standards be similar to the unknown in all
aspects considered: matrix effect, homogeneity, and so on. This would lead to the use of
standard specimens with a very limited concentration range. Such a requirement is in
disagreement with the observation that the variance of the slope factor is smaller for a
wider concentration range: The use of standard specimens covering only a small concentration range
will lead to a calibration graph with large uncertainty on the slope and intercept. On the one
hand, this advocates the use of a set of standards with a wide range of concentrations; on
the other hand, the requirement of similarity in matrix effects tends to limit the range.
Obviously, a compromise must be made. Equation (50) is a simplification of the more
general equation describing the relationship among analyte concentration Wi, specimen
homogeneity Si, measured intensity Ii, and matrix effect Mi:

Wi = Ki Ii Mi Si  (55)
The term specimen homogeneity also includes the grain size effect and the mineralogical
effect. These are notoriously difficult to treat mathematically; in fact, most methods de-
scribing the grain size effect rigorously assume, for example, the dispersed phase to be
perfect spheres of a given diameter or an arrangement of cubes (Bonetto and Riveros,
1985). Other methods allow more variability, but these also require a priori more in-
formation about the specimen, such as the composition of the individual granular phases,
the average shape and size of the phases, and so forth (Hunter and Rhodes, 1972; Lubecki
et al., 1968; Holynska and Markowicz, 1981). The fact that specimen homogeneity is as
yet not described by a single successful method is one of the reasons that Eq. (55) is
commonly reduced to

Wi = Ki Ii Mi  (56)
Fortunately, by using adequate specimen preparation methods, the effect of Si between
specimens (standards as well as unknowns) can be rendered constant. This constant factor
is then absorbed by the sensitivity Ki.
As a first approximation, the degree of variation in matrix effect between two spe-
cimens for a given analyte i can be estimated by calculating, for both compositions, the
following parameter:

Ii ∝ μi(λ0) / [μs(λ0) + G μs(λi)]  (57)

where μs(λ0) and μs(λi) are the mass-attenuation coefficients of the specimen considered
for wavelengths λ0 and λi, respectively, and G is the geometrical factor for both compo-
sitions. The relative difference between these expressions should not exceed a few percent;
otherwise, the matrix effects become too important to ignore. If the specimens include flux
or a binding agent, then these must also be considered in Eq. (57).
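The comparison can be sketched as follows. The mass-attenuation coefficients and the geometry factor G below are invented numbers for illustration, not tabulated values.

```python
# Compare the specimen-dependent part of Eq. (57) for two compositions.
def matrix_parameter(mu_0, mu_i, G):
    """1 / (mu_s(lambda0) + G * mu_s(lambda_i)), to which Ii is proportional."""
    return 1.0 / (mu_0 + G * mu_i)

pA = matrix_parameter(mu_0=40.0, mu_i=75.0, G=1.2)   # hypothetical specimen A
pB = matrix_parameter(mu_0=42.0, mu_i=78.0, G=1.2)   # hypothetical specimen B
rel_diff = abs(pA - pB) / pA
# A relative difference of a few percent or less suggests that one linear
# calibration can serve both matrices; here it is about 4%.
```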
With increasing range of concentrations, deviations from linearity will be observed
due to variations in matrix effects between specimens and standards. The analyst must
then resort to other methods to obtain accurate results.
Matrix effects are studied most easily by considering binary systems (i.e., specimens
with only two elements or compounds).
In the case of absorption (both primary and secondary absorption must be con-
sidered), three cases can be distinguished (see Fig. 6):

1. A simple, linear relationship between relative intensity R and weight fraction W
(curve 1 in Fig. 6). In this case, there is no matrix effect: The analyte and the
matrix have very similar (in principle, the same) attenuation coefficients for the
incident and the characteristic radiations.
2. Curve 2 in Figure 6 is obtained when the matrix has a higher attenuation
coefficient for the analyte's characteristic radiation than the analyte itself: The
characteristic radiation is primarily absorbed by the matrix element. This is
usually called positive absorption.
3. Curve 3 in Figure 6 is obtained when the matrix absorbs less than the analyte
itself: The matrix has a smaller value of the attenuation coefficient for the
characteristic radiation than the analyte itself. This can be the case, for example,
if the matrix has a much lower atomic number than the analyte, such as Mo in
Al. This effect is called negative absorption.

Enhancement will generally lead to a calibration graph like curve 4 in Figure 6. The
effect of enhancement is usually smaller than that of positive or negative absorption, as
indicated by the positions, relative to curve 1, of curves 2 and 3 versus curve 4.
It can be shown that the behavior of the calibration curves can be explained in terms
of attenuation coefficients only if absorption is the only matrix effect. Furthermore,
if monochromatic excitation is used, a single constant (calculated from attenuation
coefficients) suffices to express the effect of one element on the intensity of another. In the
following section, various methods to deal with matrix effects are discussed.

Figure 6 Calibration curves for binaries. Curve 1: no net matrix effect; curve 2: net absorption of
the analyte's radiation by the matrix (positive absorption); curve 3: net absorption of the analyte's
radiation by the analyte (negative absorption); curve 4: enhancement of the analyte's radiation by
the matrix.

C. Elimination or Evaluation of the Total Matrix Effect: Compensation Methods


1. Scattered Radiation: Compton Scatter (in cooperation with Mark N. Ingham,
British Geological Survey, Keyworth, UK)
If the variation in matrix effects is mainly due to absorption, scattered x-rays can be used
to obtain an estimate of the absorption coefficient of the specimen at a certain wavelength
λs. The intensity of the scattered radiation can be shown to be inversely proportional to the
mass-attenuation coefficient μs of the specimen:

Is(λs) ∝ 1/μs(λs)  (58)

where λs is the wavelength of the scattered radiation and μs(λs) is the mass-attenuation
coefficient of the specimen for wavelength λs. This is illustrated in Figure 7. The intensity
of the fluorescent radiation, Ii, is also inversely proportional to the mass-attenuation
coefficient [Eq. (57)], but at a different wavelength:

Ii ∝ Wi / [μs(λ0) + G μs(λi)] ∝ Wi/μs*  (59)
where

μs* = μs(λ0) + G μs(λi)  (60)


Figure 7 The intensity of Compton-scattered radiation is inversely proportional to the mass-
attenuation coefficient (mac) of the specimen. Conditions: Rh tube at 60 kV.

Mass-attenuation coefficients at two different wavelengths are virtually proportional,
independent of matrix composition, provided there are no significant absorption edges
between the two wavelengths considered (Hower, 1959). Hence, the ratio Ii/Is is pro-
portional to the concentration of the analyte.
Both coherently and incoherently scattered primary radiation, such as tube lines
for tube-excited x-ray fluorescence (XRF), as well as the scattered continuous radiation
can be used. The method also corrects, to some degree, for surface finish, grain size
effects, and variations in tube voltage and current, but it does not correct for en-
hancement, thus limiting its use to analytes that are influenced by absorption only.
Furthermore, no absorption edges of major elements may be situated between the two
wavelengths considered. If that is the case, the ratio between the mass-attenuation
coefficients for the wavelengths considered depends to a large degree on the concen-
tration of the major element(s), and the ratio is no longer constant between specimens.
This reduces the range of analytes that can be covered using the scattered tube lines. The
use of scattered continuum radiation close to the analyte peak may be disadvantageous
due to limited intensity, leading to either long measurement times or poor counting
statistics.
The intensity of the scattered Compton radiation is higher for specimens mainly
consisting of low-atomic-number elements than for specimens with a higher average
atomic number. This is illustrated in Figure 8, where the scattered radiation is plotted
as a function of wavelength for two specimens: iron and magnesium. In this experi-
ment, the spectrometer was equipped with a Rh anode x-ray tube. The two sharp
peaks that can be observed are the RhKβ (at 0.0546 nm) and the RhKα (at 0.0613 nm),
respectively. The other two, much broader peaks are the Compton-scattered RhKβ and
the RhKα. The Compton-scattered peaks are much broader than Rayleigh-scattered
characteristic lines. They are also shifted toward longer wavelengths by an amount Δλ,
which is given by

Δλ = 0.002431(1 − cos ψ)  (61)

where ψ is the angle through which the radiation is scattered (ψ = ψ′ + ψ″) and Δλ is ex-
pressed in nanometers.

Figure 8 Intensity of scattered radiation as a function of wavelength. Two different specimens
have been used: iron and magnesium. A Rh anode tube was used in this experiment. Note the large
difference in intensity between the specimens. For the specimen consisting of the element with the
higher atomic number (Fe), the intensity of the Compton (incoherent)-scattered radiation is lower
than the intensity of the Rayleigh (coherent)-scattered radiation, compared to the specimen with the
lower-atomic-number element (Mg).

In the spectrometer used for the recording of the spectra in Figure 8, ψ is 100°.
The Compton shift, Δλ, is thus 0.0029 nm for this configuration. The maxima of the
Compton peaks (at 0.0576 nm and 0.0644 nm) in Figure 8 are in good agreement with
the theoretical values: 0.0546 nm + 0.0029 nm = 0.0575 nm and 0.0613 nm + 0.0029 nm
= 0.0642 nm, respectively.
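The numbers in this example can be reproduced directly from Eq. (61):

```python
import math

# Compton wavelength shift (Eq. 61): delta_lambda = 0.002431 * (1 - cos(psi)) nm,
# with psi the total scattering angle of the spectrometer.
def compton_shift_nm(psi_deg):
    return 0.002431 * (1.0 - math.cos(math.radians(psi_deg)))

shift = compton_shift_nm(100.0)    # spectrometer with psi = 100 degrees
rh_kb_compton = 0.0546 + shift     # expected Compton-scattered RhKbeta, nm
rh_ka_compton = 0.0613 + shift     # expected Compton-scattered RhKalpha, nm
```

For ψ = 100°, the shift evaluates to about 0.0029 nm, placing the Compton peaks at roughly 0.0575 nm and 0.0642 nm, as stated in the text.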
The intensity of the Compton-scattered radiation is higher for shorter wavelengths
and for specimens consisting of elements with low atomic numbers. For a given wave-
length (e.g., the characteristic radiation of a tube line), the intensity of Compton scatter
decreases as the specimen consists of more and more elements with higher atomic
numbers. For specimens made up of oxides, the scattered intensity is usually so intense
that it can be measured with sufficient precision in a relatively short time. On the other
hand, for specimens made up predominantly of heavier elements, such as steel and even
more so for brasses and solders, the intensity of the scattered radiation is very low (see
Fig. 8), and the counting statistical error can preclude precise analysis in a reasonable
amount of time. The most common application where this approach (or a variant
thereof) is used is the determination of trace elements in specimens of geological
origin. This is illustrated in Figure 9 for the determination of Sr in specimens of widely
varying geological origin. In Figure 9a, the net count rate for SrKα is plotted against the
concentration for a large number of specimens. There is a considerable spread around
the calibration line established. The scatter is greatly reduced when the net count rate of
SrKα is divided by the count rate of the RhKα Compton-scattered tube radiation, as
indicated in Figure 9b.

Figure 9 (a) Net count rate of SrKα as a function of Sr concentration for a large number of
specimens of varying geological origin. There is considerable spread around the calibration line.
(b) The ratio of the count rates of the SrKα radiation and the Compton-scattered tube line is plotted
against the concentration of Sr. The spread of the data points around the calibration line is now
much reduced compared to (a); this is especially the case for the point labeled A.
Feather and Willis (1976) have shown that the intensity of the Compton peak can
also be used as an estimate of the background under characteristic lines; this eliminates
measuring the intensity of the background near the peak. With this method, not only is the
measurement time per specimen reduced (using WDS) but the difficult task of finding
interference-free background positions is no longer required.

2. Internal Standard
In this method, an element is added to each specimen in a fixed proportion to the original
sample. This addition has to be made to the standard samples as well as to the unknowns.
The characteristic radiation of the element added should be similar to the characteristic
radiation of the analyte in terms of absorption and enhancement properties in the matrix
considered. Such an element is called an added internal standard, or internal stan-
dard for short. In practice, the method works equally well if a pure element or a pure
compound is added, or if a solution containing the internal standard element is used. If a so-
lution is used, care must be taken that the solution itself does not contain any elements
that are to be analyzed. The composition of the solution used as the additive must be
constant; otherwise, it might affect the matrix effect. The intensity of the internal standard
is affected by matrix effects in much the same way as the intensity of the analyte, provided
there are no absorption edges (leading to differences in absorption) or characteristic lines,
including scattered tube lines (leading to differences due to enhancement), between the two
wavelengths considered. Because
Wi = Ki Ii Mi  (62)
for the analyte i and
Ws = Ks Is Ms  (63)
for the internal standard s, the following ratio can be obtained by dividing Eq. (62)
by Eq. (63):
Ii/Is = Kis Wi  (64)
where
Kis = Ks Ms / (Ki Mi Ws)  (65)
Because the same amount of internal standard is added to all specimens, Ws is essentially a
constant and can be included in the constant Kis. It should be noted that Mi (and Ms) is
not a constant over the concentration range of interest (otherwise linear calibration would
suffice) but depends on the matrix elements. However, if both Mi and Ms vary in a similar
manner with the matrix elements, the ratio Mi/Ms is less sensitive to variation in the
matrix effect and, in practice, can be considered a constant. In practice, the constant Kis is
determined using linear regression. The main advantage of the internal standard method
over the scattered-radiation method is its ability to correct effectively for enhancement as
well as for absorption. It also corrects, at least partially, for variations in density of
pressed specimens. The requirement that the intensity of the characteristic radiation of
both the analyte and the internal standard element vary in the same manner with the
matrix effects imposes that there should be no absorption edges and no characteristic
radiation from other elements between the measured line of the analyte and that of the
internal standard. Furthermore, ideally, the analyte should not enhance the internal
standard or vice versa. If Kα radiation is measured and if the atomic number of the analyte
is Z (with Z > 23), then, very often, the elements with atomic number Z − 1 or Z + 1 are
very good candidates. This assures that there are no K absorption edges and no K emission
lines of other elements between the two elements considered. The element with atomic
number Z is not enhanced by Kα radiation from an element with Z + 1, but only by the
much weaker Kβ radiation, whereas for elements with atomic number Z + 2 and higher,
both Kα and Kβ contribute to enhancement. In practice, some enhancement between the
internal standard element and the analyte or vice versa is allowed, as the concentration of
the internal standard is constant and the ratio is based on intensities. The absence of major
elements in the specimens with L absorption edges and emission lines, however, must be
checked for. The situations that must be avoided are (1) the case where a major line of a
major matrix element is between the absorption edges of the analyte and the internal
standard and (2) the case where a major absorption edge of a major matrix element is
situated between the measured characteristic lines of the analyte and the internal standard.
In the first case, the matrix element would enhance either the analyte or the internal
standard element, but not both; in the second case, the matrix element strongly absorbs
either the radiation from the analyte or that of the internal standard, but not both. In both
of these cases, varying concentrations of the matrix element will lead to variable and different
effects on the intensities of the analyte and of the internal standard, and the ratio used in
Eq. (64) will not compensate for such events.
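The calibration of Eq. (64) can be sketched as follows; the count rates and concentrations are invented for illustration.

```python
# Internal standard method: calibrate W_i against the ratio I_i/I_s (Eq. 64).
def fit_kis(W, I_i, I_s):
    """Through-origin slope Kis of W_i versus I_i/I_s."""
    R = [a / b for a, b in zip(I_i, I_s)]
    return sum(w * r for w, r in zip(W, R)) / sum(r * r for r in R)

# Standards: the matrix attenuates both lines similarly, so the ratio stays
# linear in W_i even though the raw analyte rates do not.
W_i = [0.001, 0.002, 0.004]
I_i = [ 50.0,  80.0, 128.0]    # analyte line, matrix-affected
I_s = [500.0, 400.0, 320.0]    # internal standard line, same matrix effect
Kis = fit_kis(W_i, I_i, I_s)

# Unknown specimen with measured I_i = 90 cps and I_s = 450 cps:
W_unknown = Kis * (90.0 / 450.0)
```

In this constructed example the matrix effect cancels exactly in the ratio, so Kis is the same for every standard and the unknown evaluates to a weight fraction of 0.002.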
The method, however, has some important limitations:
The specimen preparation is made more complicated and is more susceptible to
errors.
The addition of reagents and the requirement of homogeneity of the specimen tend
to limit the practical application of the method to the analysis of liquids and
fused specimens, although it sometimes finds application in the analysis of
pressed powders.
Although the rule Z + 1 or Z − 1 can serve as a rule of thumb, it is quite clear that, for
samples where many elements are to be quantified, a suitable internal standard
cannot be found for every analyte element. Sometimes, several elements are used
in one internal standard solution to provide suitable internal standards for
more analytes.
Also, the fact that the internal standard method is easier to apply to liquids can
generate some problems. Heavier elements (e.g., Mo) are more difficult to
determine using this method, because liquid specimens are generally not of
infinite thickness for the K wavelengths of these heavier elements. In such cases,
the L line can be used, with an appropriate internal standard. The method will,
however, also provide some compensation for the effects of noninfinite
thickness, especially if the wavelength of the internal standard selected is very
similar to that of the analyte line.
Theoretically, L lines of a given element can be used as internal standards for K lines
of other elements and vice versa if these wavelengths are reasonably close to each other
and neither interfering lines nor edges occur between them. In principle, the method allows
the determination of one or two elements in a specimen without requiring analysis (or
knowledge) of the complete matrix.
The range of concentration over which this method is suitable can be quite large, up
to 10-20% in favorable situations, but the internal standard technique is most effective at
low concentrations (or high dilutions). The method finds application, for instance, in the
determination of Ni and V in petroleum products, where MnKα is used as an internal
standard (ISO, 1995). The internal standard method allows an accurate determination of
these elements in a much wider variety of petroleum products than does a method based on
linear calibration.

3. Standard Addition Methods

Another method of analysis involves the addition of known quantities of the analyte to the
specimen and is referred to as the standard addition method.
If the analyte element is present at low levels and no suitable standards are available
(e.g., the matrix is unknown), standard addition and/or dilution may prove to be an al-
ternative, especially if the analyst is interested in only one analyte element. The principle is
the following: Adding a known amount of the analyte i, ΔWi, to the unknown specimen
will give an increased intensity Ii + ΔIi. Assuming a linear calibration, the following
equations apply:
Wi = Ki Ii  (62)
for the original specimen and
Wi + ΔWi = Ki (Ii + ΔIi)  (66)
for the specimen with the addition. Thus, the method assumes that linear calibration is
adequate throughout the range of addition, because it assumes that an increase in the
concentration of the analyte by an amount ΔWi will increase the intensity by ΔIi = ΔWi/Ki.

Figure 10 Standard addition method. The net intensity is plotted versus weight fraction of the
element added to the sample and a best-fit line is determined. The intercept of that line with the
concentration axis is Wi.



These equations can be solved for Wi. To check the linearity of the calibration, the process
can be repeated by adding different amounts of the analyte to the specimen and plotting
the intensity measured versus the concentrations added (Fig. 10). The intercept of the line
on the concentration axis equals Wi. Note that the concentration in the unknown is
actually found through extrapolation of the linear calibration toward zero intensity. The
intensities used for calibration must be corrected for background and line overlap. If this
correction is not performed (or not performed accurately), the value of the concentration
determined will be overestimated by an amount proportional to the intensity of the
background (or line overlap) and inversely proportional to the sensitivity.
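The extrapolation of Figure 10 can be sketched numerically; the added concentrations and net count rates below are invented for illustration.

```python
# Standard addition: fit net intensity versus added concentration, then
# extrapolate to zero intensity; the x-axis intercept gives -W_i.
def standard_addition(dW, I):
    """dW[0] should be 0.0 (the original specimen). Returns the estimated W_i."""
    n = len(dW)
    sx, sy = sum(dW), sum(I)
    sxy = sum(x * y for x, y in zip(dW, I))
    sxx = sum(x * x for x in dW)
    slope = (sxy - sx * sy / n) / (sxx - sx * sx / n)
    intercept = (sy - slope * sx) / n
    return intercept / slope      # magnitude of the x-axis intercept

dW = [0.0, 0.001, 0.002]          # added weight fractions of the analyte
I  = [40.0, 60.0, 80.0]           # net cps, here exactly I = 20000*(W_i + dW)
Wi = standard_addition(dW, I)
```

With these (exactly linear) data the line extrapolates to zero intensity at an added concentration of −0.002, so the estimated weight fraction in the original specimen is 0.002. Any uncorrected background would shift all intensities upward and inflate this estimate, as noted in the text.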
The method is suitable mainly for determination of trace and minor concentration
levels, as the amount ΔWi added to the sample must be in proportion to the amount Wi in
the sample itself. The extrapolation error can be quite large if the slope of the line is not
known accurately. Adding significant amounts of additives to the sample might, however,
lead to nonlinearity, as it will alter the matrix effect.
Compounds and solutions can be used for the standard addition. If the analyte i in
the original sample is in a different phase than in the additive, care must be taken in the
calculation of the concentration sought. The relevant stoichiometric or gravimetric factors
must be included. This also applies if the analyte is present in elemental or ionic form
in, for example, the original sample and in a compound phase in the additive, or vice versa.
Another way to alter the concentration of the analyte is by diluting the liquid or solid
solution of the sample. By diluting several times by known amounts, a line can be es-
tablished. By repeating this procedure with a standard solution containing a known
amount of i, the unknown concentration can be found.

4. Dilution Methods
Dilution methods can also eliminate or reduce the variation of the matrix effect, rather than
compensating for such variation. The dilution method can be explained using Eq. (17):

C(λ0, λi) = μi(λ0) / [Σj Wj μj(λ0) + G Σj Wj μj(λi)]  (17)

which can be rewritten as

C(λ0, λi) = μi(λ0) / [μs(λ0) + G μs(λi)]  (67)

where μs(λ0) and μs(λi) are the mass-attenuation coefficients of the specimen for the pri-
mary wavelength λ0 and analyte wavelength λi, respectively. Apparently, deviations from
linearity are due to variations in μs(λ0) and/or μs(λi). Enhancement is ignored at this
stage. If one adds, to the sample, D grams of a diluent (d) for each gram of sample, the
denominator of Eq. (67) becomes

[1/(1 + D)][μs(λ0) + G μs(λi)] + [D/(1 + D)][μd(λ0) + G μd(λi)]  (68)

If the term [D/(1 + D)][μd(λ0) + G μd(λi)] is much larger than [1/(1 + D)][μs(λ0) + G μs(λi)],
the factor C(λ0, λi) becomes essentially a constant, and variations due to varying matrix
effects between samples become negligible. This can be done in two ways:
(a) Making D/(1 + D) large by diluting each sample with a large, known
amount of a diluent.
(b) Adding a smaller quantity of diluent than in the previous case, but with a much
larger value of μd(λ0) + G μd(λi). This is called the technique of the heavy
absorber.
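The effect of Eq. (68) can be illustrated numerically. The attenuation values, the geometry factor G, and the "heavy absorber" diluent below are all invented for the example.

```python
# Effective denominator of C(lambda0, lambda_i) after adding D grams of diluent
# per gram of sample (Eq. 68).
def diluted_denominator(mu_s0, mu_si, mu_d0, mu_di, G, D):
    sample  = (1.0 / (1.0 + D)) * (mu_s0 + G * mu_si)
    diluent = (D / (1.0 + D)) * (mu_d0 + G * mu_di)
    return sample + diluent

G, D = 1.2, 4.0                        # 4 g of heavy absorber per gram of sample
heavy = dict(mu_d0=300.0, mu_di=500.0)
dA = diluted_denominator(40.0, 75.0, G=G, D=D, **heavy)    # matrix A
dB = diluted_denominator(80.0, 150.0, G=G, D=D, **heavy)   # matrix B, twice as absorbing
rel_undiluted = abs((80.0 + G * 150.0) - (40.0 + G * 75.0)) / (40.0 + G * 75.0)
rel_diluted   = abs(dB - dA) / dA
```

Undiluted, the two matrices differ by a factor of 2 in the denominator; after dilution with the dominant absorber, the denominators (and hence C) differ by only a few percent, at the cost of a correspondingly lower analyte intensity.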
Both of these procedures, however, require the addition of reagents to the sample. This
can easily be done for dissolved samples, either in liquids or fused samples, but it is more
difficult for powdered samples (homogeneity!).
These methods do not eliminate the matrix effects completely, but reduce their
influence. On the other hand, they also reduce the line intensity of the analyte; thus, a
compromise must be sought.
Dilution methods also have the advantage of reducing the enhancement effect if one
uses a nonfluorescing diluent (e.g., H2O or Li2B4O7). In this case, the effect is reduced by
the fact that the concentrations of both the enhancing element and the analyte are reduced.
If the diluent contains elements whose characteristic x-rays can excite the analyte, as well
as some other matrix elements, then the contribution of those unknown quantities of
matrix elements to the total enhancement is reduced: The enhancement of the analyte by
the diluent would then be the determining factor and can be considered to be constant.
This method allows the determination of all measurable elements in the sample, as
opposed to the standard addition method, where an addition must be made for each
element of interest.

D. Mathematical Methods
1. General
The term mathematical methods refers to those methods that calculate rather than
eliminate or measure the matrix effect.
Mathematical methods are independent of the specimen preparation in the sense that
specimen preparation is taken into account if the composition of the specimen presented to
the spectrometer has been changed (e.g., by fusion), but mathematical methods do not
prescribe the specimen preparation method as is done, for example, by the standard ad-
dition method. The actual calculation method used to convert intensities to concentrations
does not affect the choice of the specimen preparation method. The aim of the specimen
preparation is limited to the presentation to the spectrometer of a specimen that is
homogeneous (with respect to the XRF technique) and that has a well-defined, flat surface
representative of the bulk of the specimen. Mathematical methods usually require
knowledge of all elements in the standard specimens and allow determination of all mea-
surable elements in the unknowns. In practice, trace elements can be neglected in the cal-
culations for the analytes present at higher concentrations, as these trace compounds are
neither subject to an important (and variable) matrix effect nor do they contribute sig-
nificantly to the matrix effect of other elements. Their concentrations are often found by
straightforward linear regression. The mathematical methods are divided into two main ca-
tegories: the fundamental parameter method and the methods using influence coefficients.

2. The Fundamental Parameter Method

a. Introduction
The fundamental parameter method is based on the theory that enables one to calculate the
intensity of fluorescent radiation originating from a specimen of known composition. The
equations used usually consider both primary and secondary fluorescence (enhancement).
Higher-order effects and effects due to scattered radiation are usually neglected. Formulas to
calculate intensities of fluorescent radiation were proposed very shortly after the introduc-
tion of the commercial XRF spectrometers, in the early 1950s (Gillam and Heal, 1952). The
equation describing the intensity of the fluorescent radiation as a function of specimen
composition, spectrometer configuration, and incident spectrum was derived earlier (see Sec.
II.B) and is repeated here:

Ii = ∫[λmin to λedge,i] J(λ) [Pi(λ) + Σj Sij(λ, λj)] dλ  (69)

where Pi(λ) is the contribution of the primary fluorescence caused by incident photons
with wavelength λ, and Sij(λ, λj) is the contribution of the secondary fluorescence (en-
hancement) by characteristic photons λj which have been excited by primary photons λ.
The summation in Eq. (16) or (69) is over all elements j that have characteristic lines that
can excite the analyte i. For each of these elements j, all characteristic lines must be
considered. This is quite simple if none of the L lines or M lines of element j can excite the
analyte. In that case, only the Kα and Kβ lines are to be considered. If the L lines of an
element j are energetic enough for enhancement of the analyte, the sheer number of L lines
(e.g., W has more than 20 characteristic L lines that can be considered) would make the
calculation very time-consuming. Therefore, most programs consider only three to five L
lines for each element. A similar reasoning holds for the M lines.
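The numerical evaluation of Eq. (69) can be sketched in a heavily simplified form, keeping only the primary term Pi and ignoring enhancement. The flat incident spectrum, the cube-law attenuation functions, and the wavelength grid below are toy assumptions, not fundamental parameter tables.

```python
# Simplified numeric integration of the primary-fluorescence part of Eq. (69):
# I_i ~ integral of J(l) * mu_i(l) / (mu_s(l) + G*mu_s(l_i)) over l, by trapezoids.
def primary_intensity(J, mu_i, mu_s0, mu_si, G, wavelengths):
    total = 0.0
    for l1, l2 in zip(wavelengths, wavelengths[1:]):
        f1 = J(l1) * mu_i(l1) / (mu_s0(l1) + G * mu_si)
        f2 = J(l2) * mu_i(l2) / (mu_s0(l2) + G * mu_si)
        total += 0.5 * (f1 + f2) * (l2 - l1)
    return total

# Toy grid from lambda_min up to the absorption edge of the analyte (nm).
grid = [0.02 + k * 0.001 for k in range(31)]
I = primary_intensity(J=lambda l: 1.0,              # flat incident spectrum
                      mu_i=lambda l: 1e5 * l**3,    # analyte photoabsorption
                      mu_s0=lambda l: 2e5 * l**3,   # specimen attenuation at l
                      mu_si=80.0, G=1.2, wavelengths=grid)
```

A full fundamental parameter program replaces every toy function here with tabulated data (tube spectrum, attenuation coefficients, fluorescence yields) and adds the enhancement terms Sij, which is what made these calculations expensive on early hardware.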
Application of these formulas was originally limited to the prediction of intensities
for specimens of given composition. The application of those formulas for analysis was
not pursued until the 1960s. The method of analyzing specimens by fundamental para-
meter equations was developed independently around the same time by Criss and
Birks (1968) and by Shiraiwa and Fujino (1966). Due to the large amount of calculation
involved (especially the integration over the incident spectrum and the calculation of the
contribution of enhancement), these programs initially ran on mainframes and mini-
computers. A PC version of the same program was proposed by Criss in 1980 (Criss,
1980a).
The application of a fundamental parameter method for analyzing specimens con-
sists of two steps: calibration and analysis. Both steps will be discussed in more detail in
the following subsections.
b. Calibration
The fundamental parameter equation is used to predict the intensity of characteristic lines
for a composition identical to that of the standard used. If more than one standard spe-
cimen is used, the calculations are repeated for each of the standards. The calculations are
performed using the appropriate geometry (i.e., incidence and take-off angles are taken in
agreement with those of the spectrometer used), and the parameters determining the tube
spectrum (such as the anode material, voltage, thickness of the beryllium window, and so on)
correspond to the ones used in the spectrometer for the measurements. The intensities
predicted are (almost always) net intensities, void of background, line overlap, crystal
fluorescence, and so on. Hence, the measured intensities must be corrected for such
spectral artifacts. These theoretically predicted intensities are then linked to the actually
measured ones. If only one standard specimen is used, the ratio between the measured
intensity and the calculated intensity is calculated. If more than one standard is used, the
net intensities obtained from the measurements are plotted versus the calculated intensities,
and a straight line can be determined for each characteristic line measured. The slope of
such a line is the proportionality factor between predicted (calculated) and measured in-
tensities. In general, this relationship will be determined more accurately if more standards
are used.
A special case of calibration ensues when one uses pure elements as standards. Di-
viding each of the measured (net) intensities by the intensity of the corresponding pure
element gives the relative intensity. This relative intensity can be calculated directly by some
fundamental parameter programs. In fact, some programs use equations that express the
intensity of characteristic radiation directly in terms of relative intensity. The relative
intensity in this respect is thus defined as the intensity of the sample, divided by the intensity
of the corresponding pure element (or compound, if the concentration of the analyte is
defined as a compound concentration), under identical conditions for excitation and detection.
Basically, the calibration function, which is determined using the measured in-
tensities of the standards and the calculated intensities, accounts for instrument-related
factors only; matrix effects are accounted for by using the physical theory as described in
the fundamental parameter equation.
The instrument-related parameters for a wavelength-dispersive spectrometer are as
follows:
Collimation
Crystal reflectivity
Efficiency of the detector(s)
Fraction of the emergent beam allowed into the detector(s) after Bragg reflection (also
dependent on the Bragg angle)
For energy-dispersive spectrometers, the instrument-related parameters are collimation
and detector efficiency. The effect of the windows of the detector, the dead layer, and so
forth can also be taken into account.
c. Analysis
Step 1. For every unknown specimen, a first estimate of the composition is made.
There are several ways to obtain such a first estimate. They vary from using a simple, fixed
composition (e.g., 100% divided by the number of elements considered, so that in the
first estimate all the concentrations of the elements are taken equal to one
another) to the composition derived from the measured intensities in combination with the
calibration curves. Using the calibration data, it is possible to estimate for each element
the intensity that would be obtained if the pure element were measured. These numbers
are then used to divide the intensity of the corresponding element, measured in the un-
known specimen. The resulting fractions are scaled to 100% and used as the first estimate.
Step 2. For this estimate of composition, the theoretical intensities are calculated.
These are converted to measured intensities, using the calibration data, so that these two
sets of intensities for the same specimen can be compared.
Step 3. The next estimate of the composition is obtained based on the difference between
the measured and calculated intensities. Again, different methods are available:
1. The simplest method is based on linear interpolation. If, for a given element, the
measured intensity is 10% higher than the calculated intensity, the concentration
of that element is increased by 10%.
2. Rather than a linear relationship, some authors (Criss and Birks, 1968) use an
interpolation based on three points. This is done because the relationship
between concentration and intensity is usually nonlinear over a wider range.
If the specimen is a pseudobinary, hyperbolic relationships have proven to be
better approximations. For more complex specimens, this still works out quite
well, because the concentrations of the other elements are considered fixed at this
stage. The hyperbolic equation requires a minimum of three points for its
parameters to be determined. The points requiring the least additional
calculation time are the following:
The origin (net intensity zero at concentration zero).
The pure element, W = 100%; the corresponding intensity has already been
calculated.
The point W = current estimate; the intensity has already been calculated.
These three points allow a hyperbolic relation to be established around the
current estimated composition. From this curve and the measured
intensity, the new concentration estimate is derived. This approach is
repeated for every analyte element. This method usually provides faster
convergence than using the simple straight line.
3. It is also possible to use gradient methods to determine the next composition
estimate. The formulas for the first derivative, with respect to concentration, of
the fundamental parameter equation have been published (Shiraiwa and Fujino,
1968), but the calculation is cumbersome and time-consuming. Alternatively, the
derivatives can be obtained by a finite-difference method, in which the effect of a
small change in composition on the intensity is observed.
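The three-point hyperbolic update of method 2 above can be sketched as follows. This is a minimal illustration, not code from any published program: the function name and arguments are hypothetical, and the hyperbola R = W/[1 + a(1 - W)] is assumed as the form passing through the origin and the pure-element point (1, 1), with the third point fixing the parameter a.

```python
def hyperbolic_update(w_current, r_calculated, r_measured):
    """One hyperbolic interpolation step for a single analyte (sketch).

    The hyperbola R = W / (1 + a*(1 - W)) passes through the origin (0, 0)
    and the pure element (1, 1) for any parameter a; the third point
    (w_current, r_calculated) fixes a.  Inverting the hyperbola at the
    measured relative intensity gives the next concentration estimate.
    """
    # Fit the curvature parameter a from the current (W, R) point.
    a = (w_current / r_calculated - 1.0) / (1.0 - w_current)
    # Invert R = W / (1 + a*(1 - W)) at R = r_measured:
    # R*(1 + a) = W*(1 + a*R)  =>  W = R*(1 + a) / (1 + a*R)
    return r_measured * (1.0 + a) / (1.0 + a * r_measured)
```

With a = 0, the update degenerates to the linear case W = R; positive a corresponds to absorption-dominated matrices (intensity below the linear relation).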
Step 4. The process, starting at Step 2, is now repeated until convergence is obtained.
Different convergence criteria exist. The calculation can be terminated if one of the fol-
lowing criteria is satisfied for all the elements (compounds) concerned:
1. The intensities, calculated in Step 2, do not change from one step to another by
more than a preset level.
2. The intensities, calculated in Step 2, agree, to within a preset level, with the
measured intensities.
3. The compositions, calculated in Step 3, do not change from one step to another
by more than a preset level (e.g., 0.0005 or 0.0001 by weight fraction).
One or more of these criteria might be incorporated in the program. These criteria,
however, are no guarantee that the final result is accurate to within the level specified in
the convergence criteria. Furthermore, especially with convergence criteria based on
concentrations (such as criterion 3), it must be realized that a convergence criterion of
0.0005 is unacceptable when determining elements at levels below 0.0005.
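The iteration of Steps 1 through 4 can be sketched as follows, using the simple linear update (method 1 of Step 3) and the concentration-based convergence criterion 3. All names here are illustrative assumptions; in particular, `predict` merely stands in for the fundamental parameter intensity calculation.

```python
def iterate_composition(measured, predict, tol=1e-4, max_iter=50):
    """Fixed-point iteration of Steps 1-4 (sketch).

    measured : dict, element -> measured net intensity
    predict  : callable(composition dict) -> dict of theoretical
               intensities; stands in for the fundamental parameter
               calculation.
    Uses the simple linear update (method 1): each concentration is
    scaled by the ratio of measured to calculated intensity, then the
    estimate is renormalized to sum to 1.  Stops when no concentration
    changes by more than `tol` (criterion 3).
    """
    n = len(measured)
    comp = {el: 1.0 / n for el in measured}      # Step 1: flat first estimate
    for _ in range(max_iter):
        calc = predict(comp)                     # Step 2: theoretical intensities
        new = {el: comp[el] * measured[el] / calc[el] for el in comp}  # Step 3
        total = sum(new.values())
        new = {el: w / total for el, w in new.items()}
        if all(abs(new[el] - comp[el]) <= tol for el in comp):  # Step 4
            return new
        comp = new
    return comp
```

Because the true composition is a fixed point of this mapping (at the correct composition the measured-to-calculated intensity ratios are all 1), the iteration converges whenever the matrix effects are not too severe.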
d. Extensions to the Method
More complex scenarios are possible. The most common one includes the calculation
of a set of influence coefficients (based on theoretical calculations) to obtain a com-
position quickly, close to the final result. Next, the fundamental parameter method is
applied (Rousseau, 1984a). This should yield faster convergence in terms of compu-
tation time, because the calculation by influence factors of the preliminary composition
is very fast. This method reduces the number of evaluations of the fundamental param-
eter equation.
Also, the different programs available differ quite markedly in their treatment of
the intensities of the standards measured. Some programs use a weighting of the stan-
dards, stressing the standard(s) closest (in terms of intensity) to the unknown (Criss,
1980a). Such programs use a different calibration for each unknown specimen. The
unknown specimen dictates which standards will be given a high weighting factor and
which standards will be used with less weighting. Other programs use all standards with
equal weighting.
e. Typical Results
Early results of the fundamental parameter method on stainless steels are given by Criss
and Birks (1968). The average relative differences between x-ray results and certified values
were about 3-4%. Later, Criss et al. (1978) reported accuracies of about 1.5% relative for
stainless steels, using a more accurate fundamental parameter program. Typical results for
tool steel alloys are given in Table 3. Often, the fundamental parameter method is con-
sidered to be less accurate than an influence coefficient algorithm. This is caused primarily
by the fact that the fundamental parameter method has extensively been used and
described as a method that allows quantitative analysis with only a few standards. This
is obviously an advantage, but it does not imply that fundamental parameter methods
cannot be used in combination with many standards similar to the unknown. As a matter
of fact, on several occasions and with a variety of matrices, the authors have obtained
results of analysis with an accuracy similar to that of influence coefficient algorithms when
using the same standards in both cases.
f. Factors Affecting Accuracy
The accuracy of the final results is determined by the following:
The measurement
The specimen preparation
The physical constants used in the fundamental parameter equation
The limited description of the physical processes that are considered in the
fundamental parameter equations
The standards and the calibration
In the following discussion, the effect of measurements and specimen preparation
will not be considered.
Table 3 Analysis of Seven Tool Steels with a Fundamental Parameter Program [XRF11, from
CRISS SOFTWARE, Largo, MD (Criss, 1980)]

Element   Minimum conc. (%)   Maximum conc. (%)   Standard deviation (%)
W             1.8                 20.4                 0.52
Co            0.0                 10.0                 0.20
Mn            0.21                 0.41                0.01
Cr            2.9                  5.0                 0.013
Mo            0.2                  9.4                 0.04
S             0.015                0.029               0.003
P             0.022                0.029               0.003
Si            0.14                 0.27                0.03
C             0.65                 1.02                0.16
Note: One standard has been used. The minimum and maximum concentrations refer to the minimum and
maximum concentrations in the set of analyzed specimens, respectively. The standard deviation is calculated from
the difference between concentration values found and certified.
Source: Data courtesy of Philips Analytical, Almelo, The Netherlands.
g. Physical Constants
The physical constants used in the fundamental parameter equations are as follows:

Incidence and exit angles
Spectrum of the incident beam
Mass-attenuation coefficients
Fluorescence yields
Absorption jump ratios
Ratios of the intensities of different lines within a given series (e.g., the Kα/Kβ ratio)
Wavelengths (or energies) of absorption edges and emission lines

Incidence and exit angles. The incidence angle in most wavelength-dispersive (WD) and
energy-dispersive (ED) spectrometers is, in fact, defined by a relatively wide cone with a different
intensity at the boundaries compared to the center. This incident cone is neglected and the
incident radiation is considered parallel, along a single, fixed direction. A similar observation
holds for the exit angle. The effect is far less pronounced if diffraction from a plane crystal
surface is used for dispersion, as is done, for example, in most sequential WD spectrometers.
This has been studied to some extent by Muller (1972). To our knowledge, none of the fun-
damental parameter programs available takes this effect into account. Its influence, however, is,
to some extent, compensated for by calibration with standard specimens.
Spectrum of the incident beam. The spectrum of the incident beam from an x-ray tube
requires more attention. Parts of the primary spectrum might excite an element B
that, in turn, excites element A very efficiently. In such cases, this enhancement may make
the intensity of element A sensitive to small errors in the tube spectrum representation,
which would not be compensated for if pure A was used for calibration. This can arise,
for example, in the analysis of silica-zirconia specimens with a Rh tube (Criss, 1980b).
Pure silica is relatively insensitive to the intensity of the characteristic K lines of Rh. In
combination with Zr, however, the situation is different. Indeed, the Rh K lines are strongly
absorbed by Zr. Zr then emits K and L lines that enhance Si. As a result, the Si intensity is
more sensitive to the Rh K lines in SiO2-ZrO2 mixtures than it is in pure SiO2. Tube spectra
have been calculated using, for example, the algorithm of Pella et al. (1985).
Mass-attenuation coefficients. There are several compilations of mass-attenuation
coefficients published in the literature. A continuing effort to compile the most compre-
hensive table has been undertaken by the National Institute of Standards and Technology
(formerly the National Bureau of Standards), Gaithersburg, MD.
When selecting a table of mass-attenuation coefficients for use in a fundamental
parameter program, the following question must be addressed: Does the table cover all the
analytical needs? (In practice, does it cover the complete range of interest, from the longest
wavelength considered down to the excitation potential of the tube?)
The analyst should be aware that the use of formulas to generate mass-attenuation
coefficients can lead to values that differ significantly from the corresponding
table values.
Presently, for applications in XRF, the compilations of McMaster et al. (1969),
Heinrich (1966), Leroux and Thinh (1977), or Veigele (1974) are most often used. A short
discussion on the agreement between some of these compilations has been presented by
Vrebos and Pella (1988). A more recent compilation has been published by de Boer (1989).
Fluorescence yields. A comprehensive reference to fluorescence yields, including
Coster-Kronig transitions, can be found in the work of Bambynek et al. (1972) (see
also Chapter 1 and Appendix VI).
Absorption jump ratios. These can be derived from the tables of attenuation coef-
ficients.
Ratios of different fluorescent lines within a family. Data for the K spectra can be
found in the work by Venugopala Rao et al. (1972) (see also Chapter 1).
Wavelengths of absorption edges and emission lines. A comprehensive table was
published by Bearden (1967) and is also presented in the appendices to Chapter 1. Because
attenuation coefficients are wavelength dependent, an error in the wavelength of any char-
acteristic line will automatically lead to a bias in the corresponding attenuation coefficients.
h. Limited Physical Processes Considered
The fundamental parameter equation [Eq. (16)] does not consider all physical processes in
the specimens. Three of the most obvious that are missing are described here.
Tertiary fluorescence. Although the formula for tertiary fluorescence has been de-
rived by, for example, Shiraiwa and Fujino (1966) and Pollai and Ebel (1971), it is not
included in most fundamental parameter programs. Usually, the tertiary fluorescence effect
is considered small enough to be negligible. Shiraiwa and Fujino (1967, 1974) have pre-
sented data showing a maximum contribution of tertiary fluorescence of about 3% relative
to the total intensity of Cr in Fe-Cr-Ni specimens. Therefore, even in Fe-Cr-Ni specimens,
whose characteristic lines and absorption edges are ideally positioned relative to one an-
other to favor enhancement, the effect of tertiary fluorescence is quite limited. Higher-order
enhancement is also possible, but it is even less pronounced than tertiary fluorescence.
Scatter. Other processes not considered in most of the fundamental parameter
methods are coherent and incoherent scatter of both the primary spectrum and the
fluorescent lines. This is usually justified by pointing out that the photoelectric effect is, by
far, the major contribution to the total absorption. It is believed that the contribution by
scattered photons to the excitation of characteristic photons is negligible. However, in
some cases the scattered primary spectrum may have a considerable influence, as illu-
strated earlier in this chapter. The equations describing the contribution of scatter to
fluorescent intensity have been derived by Pollai et al. (1971). These equations have ob-
viously many similarities to those for secondary fluorescence.
Photoelectrons. The processes that are probably the least well known in the funda-
mental parameter method are related to the contributions of the photoelectrons and of the
Auger electrons that are produced as a result of absorption of the primary and fluorescent
x-ray photons. These electrons have sufficient energy to excite other atoms and thus create
additional fluorescence. This is especially important in the case of low-atomic-number
elements, as has been described by Mantler (1993) and has been illustrated, for example,
by Kaufmann et al. (1994).
i. Standards and Calibration
The use of good standards (similar to the unknown) will almost always lead to more
accurate results, compared to a situation where the standards used have a widely different
composition from that of the unknown. This is because most of the uncertainties, caused
by inaccuracies in the physical constants, cancel. The degree of similarity between stan-
dards and unknown has an important effect on the accuracy of the analysis.

3. Influence Coefficient Algorithms


Another class of mathematical methods calculates the matrix effect by means of coeffi-
cients, rather than by evaluating the fundamental parameter equation for each unknown.
It will be shown that these coefficients can also be calculated from theory, using funda-
mental parameters. Many such influence coefficient algorithms have been proposed and
they have been divided and subdivided in different ways (Lachance, 1979). It is not the
intention to discuss all of the algorithms here; only a few selected ones will be discussed.
This selection is based on the popularity of the methods and/or on some interesting
characteristics of their underlying theory. Some of these algorithms use only one single
coefficient per interfering element; others use more than one. The distinction used here,
however, depends on whether the influence coefficient is considered to be a constant for a
given application or whether the value of the coefficient varies with composition. The
latter methods will be discussed in Sec. V.5. Only two algorithms that use constant in-
fluence coefficients will be discussed here: the Lachance-Traill and the de Jongh algo-
rithms. The practical application of the resulting equations (i.e., calibration and analysis)
will be treated separately in Sec. V.7. It must be emphasized that the approach based on
constant influence coefficients has many aspects in common with all the influence coefficient
algorithms discussed in Secs. V.4 and V.5.
All of the influence coefficient models express the total matrix effect M_i for a binary
mixture i-j as follows:
M_i = 1 + m_ij W_j          (70)
where m_ij indicates the true binary influence coefficient describing the matrix effect of j on
the analyte i in binaries i-j. More generally,
M_i = 1 + Σ_{j=1, j≠e}^{n} m_ij W_j          (71)

for a multielement specimen, with n being the total number of elements or compounds. In
most of the influence coefficient algorithms, one element is eliminated from the summation
[i.e., one influence coefficient is used when dealing with binaries, as in Eq. (70), and n-1
coefficients deal with a specimen consisting of n compounds, Eq. (71)]. In Eq. (71) this is
explicitly indicated by the j ≠ e under the summation sign. The eliminated compound e can
be any of the ones present in the specimens; however, most authors eliminate the analyte.
The expression for M_i is then used as follows:
W_i = R_i (1 + Σ_{j=1, j≠e}^{n} m_ij W_j)          (72)

which links the relative intensity and the influence coefficients to the composition of the
specimen. The relative intensity R_i for a given analyte is defined as the ratio of the net
measured intensity I_i in the specimen to the intensity I_(i) that would have been measured
on the pure analyte under identical conditions:
R_i = I_i / I_(i)          (73)
In practice, the relative intensity is often derived indirectly from measurements on stan-
dards, and the pure element (or compound) is not required. Equation (56) can be rewritten
in terms of R_i:
W_i = R_i M_i          (74)



By definition,
M_i = W_i / R_i          (75)
The ratio of the weight fraction of the analyte W_i and its relative intensity R_i is thus the
matrix effect M_i. When the analyte radiation is absorbed (or when the absorption effects
dominate over enhancement), M_i is larger than 1. On the other hand, when en-
hancement is dominant, M_i is smaller than 1. M_i is also smaller than 1 in the absence
of enhancement when the absorption by the analyte is significantly higher than that
of the matrix elements, as indicated by curve 3 in Figure 6. One of the consequences of
Eq. (74) is that for the pure analyte W_i = 1, R_i = 1 and, thus, M_i is also equal to 1. This
implies that the matrix effect as introduced here should be viewed as relative to the pure
element and not in absolute terms. Even in a pure-element specimen, x-ray photons are
subject to matrix effects. It is possible, even in the pure analyte, to be subject to en-
hancement effects. This is, for example, the case when L lines are analyzed if the K lines
of the same analyte are also excited. Furthermore, the L lines of elements with large
atomic numbers can be fluoresced by other L lines, as indicated in Table 1. However, it
is customary to refer to this situation as a situation without matrix effect. Also, if
M_i = 1, Eq. (74) reduces to
W_i = R_i          (76)
In other words, in the absence of matrix effects, the concentration of the analyte is equal to
the relative intensity. For a specimen containing, for example, 25% (by weight) of the
analyte, an intensity will then be measured that is 25% of that of the pure analyte.
Comparing Eqs. (56) and (74) and considering the definition of the relative intensity
given by Eq. (73) yields
K_i = 1 / I_(i)
in other words, the sensitivity K_i is the reciprocal of the intensity of the pure analyte. So, the
intensity of the pure analyte can also be obtained without making measurements on a
specimen of the pure element. It can be obtained from the slope of the calibration line or
even from the measurement on a single specimen:
I_(i) = I_i M_i / W_i          (77)
where M_i is calculated and I_i is measured. M_i can be calculated using Eq. (75), where R_i is
calculated from theory for the standard specimen of known composition. Therefore, al-
though many influence coefficient algorithms are presented using the format of Eq. (72),
involving the relative intensity, there is no real need to perform measurements of the pure
analyte, as Eq. (72) can be written as
W_i = K_i I_i (1 + Σ_{j=1, j≠e}^{n} m_ij W_j)          (78)

where K_i is then determined during the calibration phase. Furthermore, if the background
is not subtracted, Eq. (78) can be written as
W_i = (B_i + K_i I_i)(1 + Σ_{j=1, j≠e}^{n} m_ij W_j)          (79)

where B_i is the background expressed as a concentration equivalent. The constants K_i and
B_i are then determined during regression analysis.
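The regression of Eq. (79) can be sketched as follows. Rearranging Eq. (79) gives W_i/M_i = B_i + K_i I_i, a straight line in the measured intensity, which can be fitted by ordinary least squares over the standards. The function and its argument layout are illustrative assumptions, not from any vendor package.

```python
def calibrate(standards):
    """Least-squares estimate of K_i and B_i in Eq. (79) (sketch).

    standards: list of (I_i, W_i, M_i) tuples, one per standard, where
    M_i is the matrix term 1 + sum(m_ij * W_j) computed from the known
    composition of that standard.  Rearranging Eq. (79) gives
    W_i / M_i = B_i + K_i * I_i, fitted here by ordinary least squares.
    """
    xs = [i for i, w, m in standards]
    ys = [w / m for i, w, m in standards]
    n = len(standards)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope K_i
    b = (sy - k * sx) / n                          # intercept B_i
    return k, b
```

If the background has been subtracted beforehand, the same fit with the intercept forced to zero recovers Eq. (78).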

4. Algorithms with Constant Coefficients


a. The Lachance-Traill Algorithm
Formulation. In 1966, Lachance and Traill proposed a correction algorithm based
on influence coefficients (Lachance and Traill, 1966). The equations are, for a ternary
consisting of the elements (or compounds) A, B, and C:
W_A = R_A (1 + α_AB W_B + α_AC W_C)          (80a)
W_B = R_B (1 + α_BA W_A + α_BC W_C)          (80b)
W_C = R_C (1 + α_CA W_A + α_CB W_B)          (80c)
R_A, R_B, and R_C are the relative intensities of A, B, and C, respectively. The coefficients α_AB,
α_AC, α_CA, and so forth are called influence coefficients. A more general notation of the
Lachance-Traill algorithm is, for analyte i,
W_i = R_i (1 + Σ_{j=1, j≠i}^{n} α_ij W_j)          (81)

where the summation covers all n elements (or compounds) in the specimen, except the
analyte itself. Hence, there are n-1 terms in the summation. This is common to all cur-
rently used algorithms. Most of the algorithms developed earlier [such as Sherman's
(1953) and Beattie and Brissey's (1954)] used to have n terms, rather than n-1, for spe-
cimens with n elements.
Equations (80a), (80b), and (80c) are linear equations in the concentrations of the
elements W_A, W_B, and W_C, respectively. Note that there are only two coefficients for each
analyte element. Consider, for example, the first equation of the set (80), namely Eq. (80a):
Element A is the analyte and its concentration is equal to the relative intensity R_A, mul-
tiplied by the matrix correction factor (1 + α_AB W_B + α_AC W_C). This matrix correction
factor has only two coefficients: one (α_AB) to describe the effect of element B on the in-
tensity of A and, similarly, one to describe the effect of element C on the intensity of A.
The value of the coefficient α_AA, which would correct for the effect of A on its own in-
tensity (sometimes, but incorrectly, referred to as self-absorption), is zero. Similarly, α_BB
and α_CC are also zero. The effect of A on A, however, is taken into account, as will be
shown in the next subsection.
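Because the weight fractions W_j appear on the right-hand side of Eqs. (80a)-(80c), the system is usually solved by successive substitution. A minimal sketch, with hypothetical names and dictionary-based bookkeeping:

```python
def lachance_traill(rel_int, alpha, tol=1e-6, max_iter=100):
    """Iteratively solve the Lachance-Traill equations (80)/(81) (sketch).

    rel_int : dict, analyte -> relative intensity R_i
    alpha   : dict, (i, j) -> influence coefficient alpha_ij
    Starting from W_i = R_i (no matrix effect), each pass re-evaluates
    Eq. (81) with the latest concentration estimates until the largest
    change falls below `tol`.
    """
    w = dict(rel_int)                  # first estimate: W_i = R_i
    for _ in range(max_iter):
        new = {
            i: rel_int[i] * (1.0 + sum(alpha[i, j] * w[j]
                                       for j in w if j != i))
            for i in w
        }
        if all(abs(new[i] - w[i]) <= tol for i in w):
            return new
        w = new
    return w
```

For moderate coefficient values the iteration converges in a handful of passes; strongly enhancing systems (large negative coefficients) may converge more slowly.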
Calculation of the coefficients. Lachance and Traill also showed that the influence
coefficients α_ij can be calculated for monochromatic excitation by photons with wave-
length λ_0 (assuming absorption only) from the expression
α_ij = [μ_j(λ_0) csc ψ' + μ_j(λ_i) csc ψ''] / [μ_i(λ_0) csc ψ' + μ_i(λ_i) csc ψ''] - 1          (82)



When secondary fluorescence (enhancement) is involved, the coefficients are calculated in
the same way. Thus, enhancement is being treated as negative absorption. This assump-
tion is not valid when enhancement is quite severe. Differences in primary absorption may
easily be confused with enhancement (see Fig. 6). From Eq. (82), it follows clearly that α_ii
is always zero. Therefore, it is not included in the summation of Eq. (81). Also, from
Eq. (82), it follows that the value of the coefficients for the Lachance-Traill algorithm
cannot be less than -1. It must be stressed that Eq. (82) is strictly valid only for mono-
chromatic excitation and for those analytes that are only subject to absorption (no en-
hancement). In this case, the influence coefficient is concentration independent: It is a
constant, even over the complete concentration range from 0% to 100%. It does, however,
depend on parameters such as the wavelength of the primary photons and the incidence
and exit angles. In all other cases (polychromatic excitation and/or enhancement),
Eq. (82), stricto sensu, cannot be used. A polychromatic beam (from, e.g., an x-ray tube)
can be replaced by a monochromatic one by resorting to the effective wavelength. The
effective wavelength, however, is composition dependent (see Sec. II.C). The value of the
coefficients calculated using Eq. (82) is then also dependent on composition, although neither
W_i nor W_j figures explicitly in Eq. (82). If enhancement is dominant, another method must be
applied to calculate the coefficients.
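Equation (82) is straightforward to evaluate once the four mass-attenuation coefficients are available from a table. A minimal sketch follows; the function name is hypothetical and the default angles are arbitrary illustrative values, not instrument constants.

```python
import math

def alpha_lt(mu_j_l0, mu_j_li, mu_i_l0, mu_i_li,
             psi1_deg=63.0, psi2_deg=33.0):
    """Evaluate Eq. (82) for a monochromatic, absorption-only case (sketch).

    mu_j_l0, mu_i_l0 : mass-attenuation coefficients of matrix element j
                       and analyte i at the primary wavelength lambda_0
    mu_j_li, mu_i_li : the same at the analyte line wavelength lambda_i
    psi1_deg, psi2_deg : incidence and take-off angles in degrees
                         (assumed example values).
    """
    csc1 = 1.0 / math.sin(math.radians(psi1_deg))
    csc2 = 1.0 / math.sin(math.radians(psi2_deg))
    num = mu_j_l0 * csc1 + mu_j_li * csc2
    den = mu_i_l0 * csc1 + mu_i_li * csc2
    # Both numerator and denominator are positive, so alpha_ij > -1 always,
    # consistent with the bound noted in the text.
    return num / den - 1.0
```

When j attenuates both the primary and the fluorescent radiation exactly as strongly as i does, the coefficient vanishes, reproducing the "no matrix effect" case.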
Influence coefficients for the algorithm of, for example, Lachance-Traill can also be
calculated based on actual measurements. Rewriting
W_i = R_i (1 + α_ij W_j)          (83)
to
α_ij = (W_i / R_i - 1) / W_j          (84)

yields an expression that can be used to obtain aij , based on the composition of the
binary and the relative intensity Ri . The drawbacks associated with this method are as
follows:
1. The calculation of Ri requires the measurements of the intensity on the pure i
(element or compound). This could lead to large errors if the intensity of i in the
binary is much lower than that of the pure, due to, for example, nonlinearity of
the detectors.
2. The pure elements (or compounds) are not always easily available or could be
unsuitable to present to the spectrometer (e.g., pure Na or Tl).
3. Equation (84) is very prone to error propagation when Wi is close to 1. The
numerator is then a dierence between two quantities of similar magnitude, and
the denominator is then close to zero, magnifying the errors.
4. Also, the availability of suitable binary specimens can present problems: Some
alloys tend to segregate and homogeneous specimens are then dicult to
obtain.
The coefficients α_ij can also be calculated from theory: Calculate R_i for the binary
with composition (W_i, W_j) rather than obtain it from measurements, and substitute in Eq.
(84). This method eliminates drawbacks 1, 2, and 4. However, a better method, without
the problem associated with error propagation, is to use Eq. (75) directly with the values
for R_i and W_i. Lachance (1988) has also presented methods to calculate the values of the
coefficients from theory.
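Equation (84) itself is a one-line computation; the sketch below (hypothetical function name, binary specimen assumed so that W_j = 1 - W_i) makes the inversion explicit.

```python
def alpha_from_binary(w_i, r_i):
    """Influence coefficient from a single binary specimen, Eq. (84) (sketch).

    w_i : known weight fraction of the analyte i in the binary
          (the interferent then has W_j = 1 - w_i)
    r_i : relative intensity of the analyte, measured or computed
          from theory.
    """
    return (w_i / r_i - 1.0) / (1.0 - w_i)
```

Note that the denominator 1 - w_i vanishes as W_i approaches 1, which is exactly the error-propagation problem listed as drawback 3 above: near-pure specimens give ill-conditioned coefficients.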
The Lachance-Traill algorithm assumes the following:
1. The influence coefficients can be treated as constants, independent of
concentration; this limits the concentration range in cases where the matrix
effects change considerably with composition.
2. The influence coefficients are invariant to the presence and nature of other
matrix elements. So α_FeCr, determined for use in Fe-Cr-Ni ternary specimens, is
the same as α_FeCr in Fe-Cr-Mo-W-Ta or Fe-Cr specimens.
b. The de Jongh Algorithm
Formulation. In 1973, de Jongh proposed an influence coefficient algorithm (de
Jongh, 1973), based on fundamental parameter calculations. The general formulation of
his equation is
W_i = E_i R_i (1 + Σ_{j=1, j≠e}^{n} α_ij W_j)          (85)
where E_i is a proportionality constant (and is usually determined during the calibration).


The summation covers n-1 elements (as is the case with the algorithm of Lachance and
Traill), but the eliminated element, e, is the same for all equations. If, for a ternary spe-
cimen, element C is eliminated, the following equations are obtained:
W_A = E_A R_A (1 + α_AA W_A + α_AB W_B)          (86a)
W_B = E_B R_B (1 + α_BA W_A + α_BB W_B)          (86b)
W_C = E_C R_C (1 + α_CA W_A + α_CB W_B)          (86c)

Note that in order to obtain the concentrations of elements A and B, the concentration of
C, W_C, is not required. This is different from Lachance and Traill's algorithm: In order to
calculate the concentration of A using Eq. (80a), the concentrations of both B and C are
required. If the user is not really interested in element C (e.g., element C is iron in stainless
steels), Eq. (86c) need not be considered, and the analysis of the ternary specimen can be
done by measuring R_A and R_B and solving Eqs. (86a) and (86b).
Calculation of the coefficients. de Jongh also presented a method to calculate the
coefficients from theory. The basis is an approximation of W_i/R_i by a Taylor series
around an average composition:
W_i/R_i = E_i + d_i1 ΔW_1 + d_i2 ΔW_2 + ... + d_in ΔW_n          (87)
where E_i is a constant given by
E_i = (W_i/R_i)_average          (88)
and
ΔW_i = W_i - W_i,average          (89)
d_ij are the partial derivatives of W_i/R_i with respect to concentration:
d_ij = ∂(W_i/R_i)/∂W_j          (90)


In practice, these derivatives are calculated as finite differences. W_i/R_i is calculated
for a specimen with the average composition W_1,average, W_2,average, ..., W_n,average [symbol
(W_i/R_i)_average]. Then, the concentration of each element j in turn is increased by a small
amount [e.g., 0.1% (0.001 in weight fraction)] and W_i/R_i is calculated for that composi-
tion [symbol (W_i/R_i)_{Wj+0.001}]. Substituting in Eq. (90) yields
d_ij = ∂(W_i/R_i)/∂W_j ≈ [(W_i/R_i)_{Wj+0.001} - (W_i/R_i)_average] / 0.001          (91)
This process is repeated for each of the elements j to calculate all the coefficients for
analyte i. This is also repeated for the other analyte elements. The coefficients calculated
from Eq. (91) can be used in Eq. (87) for analysis. Equation (87), however, has n factors
rather than n-1 and uses ΔW_j rather than W_j. Using the fact that
Σ_{j=1}^{n} ΔW_j = 0          (92)
or
ΔW_e = -ΔW_1 - ΔW_2 - ... - ΔW_n          (93)
one element e can be eliminated. The resulting equation is similar to Eq. (87), but has only
n-1 terms:
W_i/R_i = E_i + b_i1 ΔW_1 + b_i2 ΔW_2 + ... + b_in ΔW_n          (94)
with
b_ij = d_ij - d_ie          (95)
for all b_ij except b_ie, which is equal to zero. Equation (94) has n-1 terms, but they are still
in ΔW rather than W. Transformation from ΔW to W is done by substituting Eq. (89) in
Eq. (94):
W_i/R_i = (E_i - Σ_{j≠e}^{n} b_ij W_j,average) + Σ_{j≠e}^{n} b_ij W_j          (96)
which can be rearranged to Eq. (85) with
α_ij = b_ij / (E_i - Σ_{j≠e}^{n} b_ij W_j,average)          (97)
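The chain from Eq. (88) through Eq. (97) can be sketched in a few lines of code. This is an illustrative sketch, not de Jongh's own implementation: `wr_func` stands in for a fundamental parameter calculation of the matrix term W_i/R_i, and all names are assumptions.

```python
def dejongh_alphas(wr_func, w_avg, eliminate, delta=0.001):
    """Theoretical de Jongh coefficients for one analyte i (sketch).

    wr_func  : callable(composition dict) -> W_i / R_i, standing in for
               a fundamental parameter calculation of the matrix term
    w_avg    : dict, element -> average (reference) weight fraction
    eliminate: element e whose coefficient is forced to zero
    Implements Eqs. (88), (91), (95), and (97): finite-difference
    derivatives d_ij, reduced coefficients b_ij = d_ij - d_ie, then
    normalization to the alpha_ij of Eq. (85).
    """
    e_i = wr_func(w_avg)                                  # Eq. (88)
    d = {}
    for j in w_avg:                                       # Eq. (91)
        bumped = dict(w_avg)
        bumped[j] += delta
        d[j] = (wr_func(bumped) - e_i) / delta
    b = {j: d[j] - d[eliminate] for j in w_avg}           # Eq. (95); b_ie = 0
    denom = e_i - sum(b[j] * w_avg[j]                     # Eq. (97)
                      for j in w_avg if j != eliminate)
    return {j: b[j] / denom for j in w_avg if j != eliminate}
```

For a matrix term that is exactly linear in the concentrations, the finite differences are exact and the resulting coefficients reproduce Eq. (96) identically; for a realistic fundamental parameter calculation, they are accurate near the reference composition.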

Combining Eq. (85) with Eqs. (75) and (88) yields
W_i = M_i,average R_i (1 + Σ_{j=1, j≠e}^{n} α_ij W_j)          (98)
indicating that the weight fraction of the analyte is calculated from its relative intensity, a
matrix correction term, and the matrix correction term for the average composition (which
for a given composition is a constant). The value for M_i,average can be calculated using the
influence coefficients calculated and taking the composition (W_i, W_j) equal to the average
Table 4 Analysis of Stainless Steels with Theoretical Influence Coefficients (de Jongh)

Element   Min.a (%)   Max.a (%)   Std. dev.b (%)
Mn          0.64        1.47        0.015
Cr         12.40       25.83        0.06
Ni          6.16       20.70        0.06
a
Min., Max.: minimum and maximum concentration in the set of analyzed specimens, respectively.
b
Std. dev.: standard deviation, calculated from the difference between concentration values found
and certified.
Source: Data courtesy of Philips Analytical, Almelo, The Netherlands.

composition. This equation is, at first sight, very similar to the Lachance-Traill equation
[Eq. (81)], except for the term M_i,average.
Typical results. Tables of de Jongh's coefficients have been used for a wide
variety of materials, including high-temperature alloys, brass, solders, cements, glasses,
and so forth. An example for stainless steels is given in Table 4. Results are shown for
Mn, Ni, and Cr only. For these analytes, the matrix effects are the most important.
Other elements, such as Si, P, S, and C, are also present at trace level. The coefficients are
calculated at a given composition [see Eq. (91)]. The practical range of concentration over
which these coefficients yield accurate results varies from 5% to 15% in alloys to the whole
range from 0% to 100% in fused oxide specimens.
Comparison between the algorithms of de Jongh and Lachance-Traill. The following
points can be noted:
1. The basis of the de Jongh algorithm is a Taylor series expansion around an
average (or reference) composition. The values of the coefficients calculated
depend on this composition.
2. de Jongh can eliminate any element; Lachance and Traill eliminate the analyte
itself: α_ii is zero. de Jongh eliminates (i.e., fixes the coefficient to zero) the same
element for all analytes. Eliminating the base material (e.g., iron in steels) or the
loss on ignition (for beads) generally leads to smaller numerical values for the
coefficients and avoids the necessity to determine all elements.
3. de Jongh's coefficients are calculated at a given reference composition. They are
composition dependent and take into account all elements present. A coefficient
α_ij represents the effect of element j on the element i in the presence of all other
elements: They are multielement coefficients rather than binary coefficients. This
is seen in Table 5, where the values of the coefficients α_CrCr and α_CrNi are shown
for several different specimens, but with identical concentrations for the analyte
Table 5  Values for Influence Coefficients a_CrCr and a_CrNi, Calculated According to the Algorithm of de Jongh, for Selected Specimens

W_Cr    W_Fe    W_Ni    W_Ti    W_W     a_CrCr    a_CrNi
0.18    Bal.    0.08    0       0       2.06      0.750
0.18    Bal.    0.08    0       0.08    2.30      0.817
0.18    Bal.    0.08    0.08    0       2.49      0.868

Note the variation of the values of the coefficients, depending on the presence of Ti or W.

Copyright 2002 Marcel Dekker, Inc.


and the interferent. In all cases, Fe has been eliminated, and W and Ti are either present at a weight fraction of 0.08 or absent.
4. The calculation of the coefficients is based on theory, treating both absorption and enhancement effects. Hence, the coefficients are susceptible to the errors described earlier (Sec. V.D.2). However, the calculation of the coefficients involves a division of the matrix correction term for the slightly affected composition by the corresponding term of the reference composition [Eq. (97)]. This compensates to a large degree for some of the biases introduced by the fundamental parameters.
5. For a specimen containing n elements, there are n equations (one for each of the elements) if one uses the Lachance-Traill algorithm. Each of these equations has n − 1 influence coefficients; the coefficient of the analyte in each of the equations has been set to 0. For the same specimen, de Jongh requires only n − 1 equations, using n − 1 coefficients per equation. One element has been eliminated throughout. This element (or compound) is usually one that is of little or no interest to the analyst (e.g., iron in steels or the loss on ignition in fused beads) (de Jongh, 1979). The nth equation (for the eliminated element) can also be written: Its form is identical to the others, it also has n − 1 terms, and the coefficients can be calculated following exactly the same procedure as for the other coefficients.
6. The coefficients in the Lachance-Traill equation were originally calculated empirically, using measured data from many standards, in contrast with de Jongh's algorithm, which has always used theoretical coefficients. These coefficients could be obtained from Philips, eliminating the need for a large computer at each user's site. With the increased computing capabilities at the disposal of every analyst, calculation of the coefficients from theory has now been feasible for some years. It is possible to calculate the coefficients for the Lachance-Traill algorithm from theory as well (Lachance, 1988).

5. Algorithms with Variable Coefficients

a. Introduction
Both algorithms discussed so far use a single, constant coefficient for each interfering element (except one): The expression of Lachance and Traill uses coefficients expressing the effect of one element on the characteristic intensity of the analyte, relative to the analyte, ignoring all other elements. Such coefficients are therefore referred to as binary coefficients. The algorithm of de Jongh effectively calculates multielement influence coefficients. Such coefficients predict the effect of one element on the intensity of another in a given matrix.
This distinction between binary and multielement coefficients can clearly be seen in, for example, a ternary specimen. Algorithms based on binary coefficients add interelement effects from each of the constituent elements. They calculate the matrix effect using influence coefficients that have been calculated for binaries. Assume a NiFeCr specimen. The total matrix effect on Cr is accounted for using a coefficient expressing the influence of Ni on Cr (a_CrNi) and a similar coefficient for Fe on Cr (a_CrFe). Both of these coefficients are calculated for the corresponding binaries (NiCr and FeCr, respectively). In a ternary specimen (e.g., NiFeCr), however, the effect of, for example, Ni on Cr is affected by the presence of Fe (and Ni similarly affects the effect of Fe on Cr). This effect is called the crossed effect and will be discussed in a following section.
The original expression presented by Lachance and Traill (1966) to calculate the influence coefficients required the use of monochromatic excitation. An equivalent wavelength was used to calculate the coefficients when polychromatic excitation is applied. The equivalent wavelength, however, has been shown to vary with composition, as treated earlier in this chapter. The theoretical coefficients calculated according to de Jongh are also composition dependent, as the reference composition is used explicitly in the calculations.
This variation is due to the fact that the composition of the matrix varies considerably if analysis is required over a wider range of concentrations. This was recognized early in the development of influence coefficient algorithms, and many different algorithms with variable coefficients have been proposed. A variable influence coefficient in this respect is an influence coefficient that varies explicitly with the concentration of one or more components in the specimen. Some of these algorithms will be discussed in the subsequent subsections.
b. The Claisse-Quintin Algorithm
Formulation. Claisse and Quintin (1967) extended Lachance and Traill's algorithm by considering a polychromatic primary beam. The resulting equation for W_A can be expressed as

W_A = R_A [1 + Σ_{j≠A} a_Aj W_j + Σ_{j≠A} a_Ajj W_j² + Σ_{j≠A} Σ_{k≠A, k>j} a_Ajk W_j W_k]    (99)

where the summation over j has n − 1 terms (all n elements, except the analyte A), and the summation over k has (n − 2)/2 terms (all n elements, except the analyte A and element j; furthermore, if a_Ajk is used, then a_Akj is not). For a binary specimen, Eq. (99) reduces to

W_A = R_A (1 + a'_AB W_B)    (100)

with

a'_AB = a_AB + a_ABB W_B    (101)

clearly showing that the influence coefficient a'_AB varies linearly with composition (i.e., with W_B). For binaries, W_A = 1 − W_B; hence, Eq. (101) can also be rearranged to

a'_AB = a_AB + a_ABA W_A    (102)

Equations (101) and (102) are, at least theoretically, identical. It has been shown, however, that Eq. (102) is preferable to Eq. (101) if specimens with more than two elements (or compounds) are analyzed (Lachance and Claisse, 1980). This will be discussed in more detail in Sec. V.D.6. Note that the value of a_AB in Eq. (101) is different from its value in Eq. (102).
Cross-product coefficients. For a ternary specimen, the Claisse-Quintin algorithm can be written

W_A = R_A [1 + a_AB W_B + a_ABB W_B² + a_AC W_C + a_ACC W_C² + a_ABC W_B W_C]    (103)

The terms

a_AB W_B + a_ABB W_B²

and

a_AC W_C + a_ACC W_C²



are the matrix corrections due to B and C, respectively. The term a_ABC W_B W_C corrects for the simultaneous presence of both B and C; a_ABC is referred to as a cross-product coefficient.
Calculation of the coefficients. Claisse and Quintin (1967) also published methods to calculate the coefficients from measurements on binary and ternary mixtures or from theory. These methods, however, are now generally superseded by theoretical calculations, such as those discussed in Secs. V.C.4.c and V.C.4.d. Rousseau (1984b) has presented a calculation method for the coefficients in the Claisse-Quintin algorithm, and Wadleigh (1987) has commented upon this approach.
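As a numerical illustration, the ternary Claisse-Quintin correction [Eq. (103)] can be evaluated directly once the coefficients are known. The sketch below is illustrative only; the function name and all coefficient values are hypothetical, not taken from the text:

```python
def claisse_quintin_wa(ra, wb, wc, a_ab, a_abb, a_ac, a_acc, a_abc):
    """Evaluate Eq. (103): W_A = R_A * [1 + matrix correction] for a ternary A-B-C."""
    correction = (1.0
                  + a_ab * wb + a_abb * wb ** 2   # matrix effect of B on A
                  + a_ac * wc + a_acc * wc ** 2   # matrix effect of C on A
                  + a_abc * wb * wc)              # cross-product term for B and C
    return ra * correction

# With all coefficients zero, the matrix correction is unity and W_A = R_A.
print(claisse_quintin_wa(0.25, 0.40, 0.35, 0, 0, 0, 0, 0))  # -> 0.25
```

With nonzero coefficients, the bracketed term is simply the sum of the two binary corrections plus the cross-product term.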
c. The Rasberry-Heinrich Algorithm
Following a systematic study of the FeCrNi ternary system, Rasberry and Heinrich (1974) concluded that the two phenomena, absorption and enhancement, are to be described by two different equations. They introduced the following algorithm:

W_i = R_i [1 + Σ_{j≠i} A_ij W_j + Σ_{k≠i} (B_ik / (1 + W_i)) W_k]    (104)

where only one coefficient is used for each interfering element. The coefficients A_ij are used for cases where absorption is the dominant effect; in this case, the coefficient B_ik is taken equal to zero. If, for a given analyte, all B_ik coefficients are zero, Eq. (104) reduces to the Lachance-Traill expression. When enhancement by element k dominates, a B_ik coefficient is used, and the corresponding A_ij coefficient is taken equal to zero. Hence, the total number of terms in both summations is n − 1.
The correction factor for enhancement by element k can be rewritten as

a_ik = B_ik / (1 + W_i)    (105)

showing that a_ik varies with concentration in a nonlinear fashion. The algorithm is very popular for analyzing stainless steels and steels in general.
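A minimal sketch of the Rasberry-Heinrich correction [Eq. (104)] is given below. Each interfering element carries either an A (absorption) or a B (enhancement) coefficient; the element names and coefficient values are hypothetical. Because W_i appears in the denominator of the enhancement term, a current estimate of W_i must be supplied (in practice, the equation is solved iteratively):

```python
def rasberry_heinrich(ri, wi_est, absorbers, enhancers):
    """Eq. (104). absorbers: {element: (A_ij, W_j)}; enhancers: {element: (B_ik, W_k)}.
    Each interfering element appears in exactly one of the two dictionaries."""
    corr = 1.0
    for a_ij, w_j in absorbers.values():
        corr += a_ij * w_j                     # absorption term A_ij * W_j
    for b_ik, w_k in enhancers.values():
        corr += b_ik / (1.0 + wi_est) * w_k    # hyperbolic enhancement term
    return ri * corr

# With no enhancers, Eq. (104) reduces to the Lachance-Traill form.
print(round(rasberry_heinrich(0.5, 0.5, {"Cr": (0.8, 0.2)}, {}), 2))  # -> 0.58
```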
Among the disadvantages of the Rasberry-Heinrich algorithm are the following:
1. It is not always clear which interfering elements should be assigned a B coefficient and which ones an A. In PbSn alloys, the SnLa line is fluoresced (enhanced) by both SnK and PbL lines. Yet, the calibration curve for SnLa clearly shows that absorption is dominant (Fig. 11).
2. Furthermore, Eq. (105) implies that the value of the correction factor at W_i = 0 is twice its value at the other end of the calibration range, where W_i = 1. This is not generally valid. Mainardi and co-workers (1982) have therefore suggested replacing the 1 in the denominator by an additional coefficient.
3. Rasberry and Heinrich did not publish a method for calculating the coefficients from theory.

Some of the disadvantages of calculating empirical coefficients have been discussed in Sec. V.D.4.a. For these reasons, the Rasberry-Heinrich algorithm is not generally applicable. However, the concept of a hyperbolically varying influence coefficient has been incorporated in the three-coefficient algorithm of Lachance.
d. The Three-Coefficient Algorithm of Lachance
Formulation. Lachance (1981) proposed a new approximation to the binary influence coefficient a^B_ij, given by

m_ij = a^B_ij = a_ij1 + a_ij2 W_m / (1 + a_ij3 (1 − W_m))    (106)

with

W_m = 1 − W_i    (107)

W_m is the total concentration of all matrix elements. It has been shown by Lachance and Claisse (1980), as well as by Tertian (1976), that variable binary coefficients must be expressed in terms of W_m (or 1 − W_i). For binary specimens, Eq. (106) can be rewritten using W_j for W_m and W_i for (1 − W_m):

m_ij = a^B_ij = a_ij1 + a_ij2 W_j / (1 + a_ij3 W_i)    (108)

For specimens with more than two compounds, however, the difference between Eqs. (106) and (108) becomes clear. The value of the influence coefficient m_ij is approximated over the complete concentration range of the binary by the function in Eq. (106), which relies on three coefficients only. The excellent agreement between the true influence coefficient m_ij and the approximation of Eq. (106) is shown in Figure 12 for Fe in FeNi (severe enhancement) and for Fe in FeCr (pronounced absorption).
For multielement specimens, cross-product coefficients a_ijk are used to correct for the crossed effect, similar to Eq. (99). The general equation for a multielement specimen is

W_i = R_i [1 + Σ_{j≠i} (a_ij1 + a_ij2 W_m / (1 + a_ij3 (1 − W_m))) W_j + Σ_{j≠i} Σ_{k≠i, k>j} a_ijk W_j W_k]    (109)


Figure 12  The binary influence coefficient m_FeNi in FeNi binary systems (enhancement, top) and m_FeCr in FeCr (absorption, bottom) and the approximation by the hyperbolic three-coefficient algorithm of Lachance (COLA). Note the excellent agreement in both cases. Conditions: W tube at 45 kV, in a spectrometer with an incidence angle of 63° and a 33° take-off angle.

where the summation over j has n − 1 terms (all n elements, except i) and the summation over k has (n − 2)/2 terms (all n elements, except the analyte i and element j; furthermore, if a_ijk is used, then a_ikj is not).
Vrebos and Helsen (1986) have published some data on this algorithm, clearly showing the accuracy of the algorithm, using theoretically calculated intensities. The use of theoretically calculated intensities has the advantage that it avoids errors due to specimen preparation and measurement errors associated with actual measured data. Pella and co-workers (1986) have presented a comparison of the algorithm with several others and with a fundamental parameter method using experimental data.

Table 6  Composition of the Specimens Used for the Calculation of the Coefficients for Lachance's Three-Coefficient Algorithm, in Weight Fraction

Specimen No.    W_i      W_j      W_k
1               0.999    0.001    0.0
2               0.001    0.999    0.0
3               0.5      0.5      0.0
4               0.999    0.0      0.001
5               0.001    0.0      0.999
6               0.5      0.0      0.5
7               0.30     0.35     0.35
Calculation of the coefficients. The coefficients a_ij1, a_ij2, and a_ij3 are calculated using fundamental parameters at three binaries i-j. The cross-product coefficients are calculated from a ternary. The compositions of the specimens concerned are listed in Table 6. The specimens referred to in Table 6 are hypothetical specimens. The intensities are calculated from fundamental parameters and require no actual measurements on real specimens.
Step 1. Calculate the relative intensity R_i for the first composition in Table 6. If the analysis of interest has more than three elements, then the system is divided in combinations of three elements i, j, k at a time. The analyte is element i, and j and k are two interfering elements. If the system considered is one with compound phases, such as oxides, then the compositions in Table 6 are assumed to be for the oxides.
Step 2. Using Eq. (84), the corresponding influence coefficient a^B_ij can be calculated.
Step 3. For this composition, W_m = 1 − W_i = W_j = 0.001, which is small enough to be considered zero. Hence, Eq. (106) reduces to

a^B_ij = a_ij1    (110)

a^B_ij has been calculated in Step 2, so a_ij1 can be computed.
Step 4. Calculate the intensity for the second composition of Table 6 and use Eq. (84) to calculate a^B_ij. In most cases, this value will be different from the one found in Step 2 because the compositions involved are different.
Step 5. 1 − W_m = W_i = 0.001 is small enough to be considered zero; hence, Eq. (106) reduces to

a^B_ij = a_ij1 + a_ij2    (111)

a_ij1 and a^B_ij are known, so a_ij2 can be calculated.
Step 6. Calculate the intensity for the third composition of Table 6 and use Eq. (84) to calculate a^B_ij. In most cases, this value will be different from the one found in Step 2 or 4 because the compositions involved are different.
Step 7. Using W_m = 1 − W_i = 0.5 = W_i, Eq. (106) reduces to

a^B_ij = a_ij1 + a_ij2 · 0.5 / (1 + a_ij3 · 0.5)    (112)

which can be rearranged to

a_ij3 = a_ij2 / (a^B_ij − a_ij1) − 2    (113)

All coefficients on the right-hand side are known, so a_ij3 can be calculated.
Step 8. Repeat Steps 1-7 for W_i and W_k to compute the coefficients a_ik1, a_ik2, and a_ik3.
Step 9. Calculate the intensity R_i for the ternary (composition 7 in Table 6). Calculate a^B_ij and a^B_ik, using Eq. (106) and the coefficients determined earlier.
Step 10. Equation (109), combined with Eq. (106), for a ternary specimen ijk reduces to

W_i = R_i (1 + a^B_ij W_j + a^B_ik W_k + a_ijk W_j W_k)    (114)

which can be rearranged to solve for a_ijk:

a_ijk = (W_i / R_i − 1 − a^B_ij W_j − a^B_ik W_k) / (W_j W_k)    (115)

All variables on the right-hand side of Eq. (115) are known, so a_ijk can be calculated.
Step 11. Repeat for other interfering elements (j and k) and repeat for other analytes i.
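The three binary steps (Steps 3, 5, and 7) and the ternary step (Step 10) can be condensed into a few lines of code. The sketch below assumes the three binary influence coefficients a^B_ij of Eq. (84) have already been computed at W_m ≈ 0, W_m ≈ 1, and W_m = 0.5; the function names and test values are illustrative:

```python
def cola_binary_coeffs(ab_wm0, ab_wm1, ab_wm05):
    """Derive a_ij1, a_ij2, a_ij3 from binary coefficients at W_m ~ 0, ~ 1, and 0.5."""
    a1 = ab_wm0                      # Eq. (110)
    a2 = ab_wm1 - a1                 # Eq. (111)
    a3 = a2 / (ab_wm05 - a1) - 2.0   # Eq. (113)
    return a1, a2, a3

def cola_mij(a1, a2, a3, wm):
    """Eq. (106): the hyperbolic binary influence coefficient at matrix fraction W_m."""
    return a1 + a2 * wm / (1.0 + a3 * (1.0 - wm))

def cola_cross_coeff(wi, ri, ab_ij, ab_ik, wj, wk):
    """Eq. (115): cross-product coefficient from one ternary composition."""
    return (wi / ri - 1.0 - ab_ij * wj - ab_ik * wk) / (wj * wk)
```

A useful consistency check is the round trip: coefficients derived from three values of a^B_ij should reproduce those same values when substituted back into Eq. (106).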
Tao et al. (1985) published a complete computer program illustrating the method and allowing the calculation of the coefficients and the analysis of unknowns. This program suffers from an oversimplification in that only the measured analytical lines are considered for enhancement. This would generate erroneous values for the coefficients in cases such as CuZn alloys, where the ZnKa line cannot fluoresce the K shell of Cu, but the ZnKb line can do so. If the ZnKa line is used for analysis, the effect of the ZnKb line (enhancement of Cu) is not taken into account by the program. In practice, however, the only lines that are considered for enhancement are the characteristic lines used for the analysis of the other elements. This can be seen, for example, by calculating the coefficients twice for FeSi with identical conditions: once indicating that the FeKa and SiKa lines are to be used and once indicating that the FeLa line be used (Table 7). The value of the coefficients for Fe will change; this is quite obvious because the magnitude and the nature of the matrix effects on the FeKa and the FeLa characteristic lines are quite different. The values for Si, however, should not change: In both cases, the same elements are present, and using the same excitation conditions, there is no reason why the coefficients should be different, as the matrix effects are the same.
e. The Algorithm of Rousseau
Formulation. Rousseau and Claisse (1974) used a linear relationship to approximate the binary coefficients, combined with cross-product coefficients:

W_i = R_i [1 + Σ_{j≠i} (a_ij1 + a_ij2 W_m) W_j + Σ_{j≠i} Σ_{k≠i, k>j} a_ijk W_j W_k]    (116)

The binary influence coefficients are thus approximated by

m_ij = a^B_ij = a_ij1 + a_ij2 W_m    (117)

This model can be used as a stand-alone influence coefficient algorithm, but it has also been proposed as the starting point for a fundamental parameter algorithm (Rousseau, 1984a). The degree of agreement between the influence coefficient m_ij and the approximation is shown in Figure 13 for the FeNi and the FeCr binaries. The agreement for the straight line is obviously not as good as with the COLA algorithm, especially in those cases where the value of the true influence coefficient varies markedly, as is the case for Fe in FeCr (absorption). Equation (117) has been compared to the three-coefficient algorithm of Lachance by Vrebos and Helsen (1986). They show that the accuracy is somewhat less than for Lachance's method, but for most practical purposes, the Rousseau algorithm should give acceptable results.

Table 7  Values of the Coefficients of Eq. (109) for Si in FeSi Binaries

          SiKa(FeKa)    SiKa(FeLa)
a_ij1     5.396         6.284
a_ij2     1.890         0.015
a_ij3     0.409         0.846

Note: In the second column, Fe is measured using the Ka line; in the last column, the La line is used. Conditions: W tube at 45 kV, in a spectrometer with an incidence angle of 63° and a 33° take-off angle.
Calculation of the coefficients. Rousseau has shown that the fundamental parameter equation can be rearranged to

W_i = R_i [1 + Σ_{j≠i} a_ij W_j]    (118)

and he also proposed a method to calculate the a coefficients directly from fundamental parameters, without calculating the intensity first (Rousseau, 1984a). As a matter of fact, Rousseau first calculates the coefficients for a given composition and then calculates the intensity, using Eq. (118). The coefficients in Eq. (116) are calculated in a way very similar to the method described in Sec. V.D.5.d. The compositions involved are given in Table 8. The specimens referred to in Table 8 are hypothetical specimens. The intensity is calculated from fundamental parameters and requires no actual measurements on real specimens. For the first two binaries of Table 8, the influence coefficient is calculated [symbols a_ij(0.20, 0.80) and a_ij(0.80, 0.20), respectively]. Then the corresponding values are substituted in Eq. (117):

a_ij(0.20, 0.80) = a_ij1 + a_ij2 · 0.80    (119a)

a_ij(0.80, 0.20) = a_ij1 + a_ij2 · 0.20    (119b)

These equations can be solved for a_ij1 and a_ij2. Similarly, using compositions 3 and 4 from Table 8, the corresponding coefficients for i-k can be calculated. The cross-product coefficients a_ijk are calculated using Eq. (115).
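Equations (119a) and (119b) are two linear equations in the two unknowns a_ij1 and a_ij2. A small sketch of the solution, using hypothetical numbers for the round-trip check:

```python
def rousseau_linear_coeffs(ab_at_wm080, ab_at_wm020):
    """Solve Eqs. (119a)-(119b): binary influence coefficients computed at
    W_m = 0.80 (specimen 1, W_i = 0.20) and W_m = 0.20 (specimen 2, W_i = 0.80)."""
    a2 = (ab_at_wm080 - ab_at_wm020) / 0.60   # slope of the linear model, Eq. (117)
    a1 = ab_at_wm020 - 0.20 * a2              # intercept
    return a1, a2

# Round trip with assumed values a_ij1 = 1.5, a_ij2 = 0.8:
print(rousseau_linear_coeffs(1.5 + 0.8 * 0.80, 1.5 + 0.8 * 0.20))
```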

6. Specimens with More than Two Compounds

The methods described by Lachance [Eq. (106)] and Rousseau [Eq. (117)] explicitly describe algorithms to calculate the value of the binary influence coefficient by a rather simple, hyperbolic or linear, relationship. Combining these binary coefficients to describe the matrix effect for specimens with more than two elements (or compounds) is described in this subsection. The ternary system FeNiCr is taken here as an example. Figure 14 gives the relative intensity of FeKa as a function of the weight fraction of Fe in FeNiCr specimens. There is considerable spread in the intensity of FeKa, even for a constant weight fraction of Fe. For specimens with a weight fraction of 0.10 Fe, the relative intensity of FeKa varies between 0.036 and 0.16 (points marked 1 and 2 in Fig. 14). This is due to the rather different effect that Ni and Cr have on Fe: Cr is an absorber for FeKa radiation, whereas the NiK radiation can enhance FeK radiation through the process of secondary fluorescence (enhancement). For these specimens, the matrix effect M_Fe can be calculated from Eq. (75). The total matrix effect on Fe, M_Fe(FeNiCr), in these specimens, at a fixed Fe concentration of 0.1, for example, varies from 0.63 (for 0.1 Fe in FeNi, point 2) to 2.8 (for 0.1 Fe in FeCr, point 1).

Table 8  Composition of the Specimens Used for the Calculation of the Coefficients for the Linear Approximation According to Rousseau's Algorithm, in Weight Fraction

Specimen No.    W_i     W_j     W_k
1               0.20    0.80    0.0
2               0.80    0.20    0.0
3               0.20    0.0     0.80
4               0.80    0.0     0.20
5               0.30    0.35    0.35
Figure 14  The relative intensity of FeKa as a function of the concentration of Fe, in the presence of Ni and Cr. For every given weight fraction of Fe, the highest value of the intensity is obtained for the binary system FeNi (enhancement), whereas the lowest intensity is for the binary FeCr (absorption). The intermediate values are for ternary specimens, where the concentrations of Ni and Cr vary in steps of 0.1 weight fraction. At a weight fraction of Fe = 0.7, the four data points labeled a, b, c, and d represent the following specimens (W_Fe, W_Ni, W_Cr): a (0.7, 0.0, 0.3), b (0.7, 0.1, 0.2), c (0.7, 0.2, 0.1), and d (0.7, 0.3, 0.0). Points labeled 1 and 2: see text. Experimental conditions: W tube at 45 kV, 1-mm Be window, incidence and take-off angles 63° and 33°, respectively.

Now, the problem is how to calculate the matrix effect in this case, based on influence coefficients. Assume a specimen with the following composition: W_Fe = 0.1, W_Cr = 0.3, and W_Ni = 0.6 (again, all concentrations are expressed as weight fractions). The total matrix effect on Fe will be caused by both Ni and Cr, and its magnitude will be between M_Fe(FeNi; W_Fe = 0.1) = 0.63 (for Fe in FeNi) and M_Fe(FeCr; W_Fe = 0.1) = 2.8 (for Fe in FeCr). It is assumed that the total effect M_Fe(FeNiCr; W_Fe = 0.1) is proportional to the concentrations of Ni (0.6) and Cr (0.3) in this example. Applying the law of weighted averages, the total matrix effect is given by

M_Fe(FeNiCr; W_Fe = 0.1) = [W_Ni / (W_Ni + W_Cr)] M_Fe(FeNi; W_Fe = 0.1) + [W_Cr / (W_Ni + W_Cr)] M_Fe(FeCr; W_Fe = 0.1)    (120)

There is no strict derivation indicating the validity of Eq. (120) in the general case. For cases involving absorption only, the derivation is rather straightforward and based on the additivity law for absorption. For now, let it suffice to indicate that the matrix effect will change gradually when adding an element or when changing the composition of the specimen slightly; this is described by Eq. (120). By substituting the numerical values,

M_Fe(FeNiCr; W_Fe = 0.1) = [0.6 / (0.6 + 0.3)] · 0.63 + [0.3 / (0.6 + 0.3)] · 2.8 = 1.35    (121)

a value of 1.35 is obtained. This is in good agreement with the theoretical value of 1.31. Equation (120) is based on the availability of binary influence coefficients calculated at specimen compositions given by W_Fe = 0.10, W_Ni = 0.90 and W_Fe = 0.10, W_Cr = 0.90.
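The arithmetic of Eqs. (120) and (121) can be checked in a couple of lines; the two binary matrix-effect values 0.63 and 2.8 are taken from the text, and the rest is straightforward weighting:

```python
# Weighted-average estimate of the ternary matrix effect on Fe, Eqs. (120)-(121).
w_ni, w_cr = 0.6, 0.3
m_feni, m_fecr = 0.63, 2.8   # M_Fe(FeNi) and M_Fe(FeCr) at W_Fe = 0.1 (from the text)
m_ternary = (w_ni / (w_ni + w_cr)) * m_feni + (w_cr / (w_ni + w_cr)) * m_fecr
print(round(m_ternary, 2))   # -> 1.35, close to the theoretical value of 1.31
```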
Using the more general expressions for the matrix effects,

M_Fe(FeNi) = 1 + m_FeNi,bin W_Ni,bin    (122a)

and

M_Fe(FeCr) = 1 + m_FeCr,bin W_Cr,bin    (122b)

the following is obtained:

M_Fe(FeNiCr) = [W_Ni / (W_Ni + W_Cr)] (1 + m_FeNi,bin W_Ni,bin) + [W_Cr / (W_Ni + W_Cr)] (1 + m_FeCr,bin W_Cr,bin)    (123)

where W_Ni,bin is the concentration of Ni in the binary FeNi (in this example, W_Ni,bin = 0.90). Because

W_Ni,bin = W_Cr,bin = 1 − W_Fe = W_Ni + W_Cr    (124)

Equation (123) can be rearranged to

M_Fe(FeNiCr) = 1 + m_FeNi,bin W_Ni + m_FeCr,bin W_Cr    (125)

stressing the point again that the influence coefficients are to be calculated for binaries i-j with the composition W_i, W_j = 1 − W_i. This is the reason why Eqs. (106) and (117) use W_m instead of W_j.
However, applying binary coefficients to multielement specimens leads to an incomplete matrix correction, because we are trying to describe the effect of Cr and Ni on Fe in FeNiCr based on the matrix effects in the corresponding binaries only. This effect is referred to as the crossed effect and has been described by Tertian (1987), who also proposed a method to correct for it. The proposed method (Tertian, 1987) involves the use of weighting factors based on the reciprocals of the relative intensities of the binaries involved. It is a rather cumbersome method, but it is theoretically valid and does not imply any approximation whatsoever; a discussion is outside the scope of this work.
An easier method is to use the cross-product coefficients as used in Eq. (99). The derivation by Tertian and Vie le Sage (1977) offers some insight into this matter. Tertian and Vie le Sage (1977) assume that a multielement influence coefficient a^M_ij can be approximated as the sum of the binary coefficient a^B_ij and a linear variation with the other elements:

a^M_ij = a^B_ij + t_ijk W_k    (126)

where t_ijk is a coefficient expressing the effect of element k on the influence coefficient a^M_ij. Similarly,

a^M_ik = a^B_ik + t_ikj W_j    (127)

Substituting Eqs. (126) and (127) in

M_i = 1 + a^M_ij W_j + a^M_ik W_k    (128)

(the superscript M is used to explicitly indicate the use of multielement influence coefficients) yields

M_i = 1 + a^B_ij W_j + a^B_ik W_k + a_ijk W_j W_k    (129)

with

a_ijk = t_ijk + t_ikj    (130)

It is to be realized that the crossed effect is introduced by the use of binary coefficients; the use of multielement coefficients would not lead to a crossed effect. Thus, the equation expressing the matrix effect using binary influence coefficients for specimens with more than two compounds is

M_i = 1 + Σ_{j≠i} m_ij,bin W_j + Σ_{j=1, j≠i}^{n−1} Σ_{k=j+1, k≠i}^{n} a_ijk W_j W_k    (131)

and is based on cross-product coefficients to correct for the crossed effect introduced by the use of binary influence coefficients. The use of the cross-product coefficients is not mandated by the concentration range to be covered (the binary coefficients as calculated by, for example, the algorithm of Lachance are more than adequate) but is a consequence of the use of binary coefficients.

7. Application
In Secs. V.D.4 and V.D.5, several influence coefficient algorithms have been discussed. Application of the resulting equations for calibration and analysis will be discussed here and is equally valid for any of the influence coefficient algorithms.
a. Calibration
Step 1. It is assumed that the coefficients have been calculated from theory, for example, using Eq. (84) or (97).
Step 2. Calculate the matrix correction term [the square brackets in Eqs. (81), (85), (99), (104), and (109)] for all standard specimens and for a given analyte. The coefficients are known (Step 1), and for standard specimens, all weight fractions W_i and W_j are known.
Step 3. Plot the measured intensity of the analyte, multiplied by the corresponding matrix correction term, against the analyte weight fraction. Then, determine the best line,

W_i = B_i + K_i I_i [1 + ⋯]    (132)

by minimizing ΔW_i (see Sec. V.A). Note that Eq. (132) is more general than Eq. (50), which does not correct for matrix effects. This process is repeated for all analytes. Other methods are also feasible. The most common variant is the one where

W_i / [1 + ⋯] = B_i + K_i I_i    (133)

is used. This is nearly equivalent to Eq. (132), but with the background term inside the brackets:

W_i = (B_i + K_i I_i) [1 + ⋯]    (134)

The term B_i + K_i I_i is related directly to the relative intensity R_i. Corrections for line overlap should only affect this term.
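Step 3 amounts to an ordinary least-squares fit of W_i against the matrix-corrected intensity. A self-contained sketch follows; the three standards in the example are hypothetical and chosen to lie exactly on a line, whereas a real calibration would use many measured standards:

```python
def fit_calibration(weight_fracs, intensities, matrix_terms):
    """Least-squares fit of Eq. (132): W_i = B_i + K_i * I_i * [1 + ...],
    i.e., a straight line of W_i against the matrix-corrected intensity."""
    x = [i * m for i, m in zip(intensities, matrix_terms)]
    n = len(x)
    sx, sy = sum(x), sum(weight_fracs)
    sxx = sum(v * v for v in x)
    sxy = sum(v * w for v, w in zip(x, weight_fracs))
    k_i = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # sensitivity K_i
    b_i = (sy - k_i * sx) / n                        # background-like term B_i
    return b_i, k_i

# Three hypothetical standards lying exactly on W = 0.01 + 2.0 * I * [1 + ...]:
b, k = fit_calibration([0.21, 0.41, 0.61], [0.1, 0.2, 0.3], [1.0, 1.0, 1.0])
```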

b. Analysis
For each of the analytes, a set of equations has to be solved for the unknowns W_i, W_j, and so forth. If the matrix correction term used is the one according to Lachance and Traill [Eq. (81)] or de Jongh [Eq. (85)], then the set of equations can be solved algebraically (n linear equations with n unknowns for Lachance and Traill, and n − 1 equations with n − 1 unknowns for de Jongh). Mostly, however, an iterative method is used. As a first estimate, one can simply take the matrix correction term equal to 1. This yields a first estimate of the composition W_i, W_j, and so on. This first estimate is used to calculate the matrix correction terms for all analytes. Subsequently, a new composition estimate can be obtained. This process is repeated until none of the concentrations changes between subsequent iterations by more than a preset quantity.
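The iteration described above can be sketched for the Lachance-Traill correction term. In the test case, the relative intensities are constructed so that the true composition is W_A = 0.6, W_B = 0.4; all names and coefficient values are hypothetical:

```python
def solve_composition(r, alpha, tol=1e-10, max_iter=200):
    """Iteratively solve W_i = R_i * [1 + sum_j a_ij W_j] for all analytes.
    r: {analyte: relative intensity}; alpha: {analyte: {interferer: a_ij}}."""
    w = dict(r)  # first estimate: matrix correction term taken equal to 1
    for _ in range(max_iter):
        w_new = {i: r[i] * (1.0 + sum(alpha[i].get(j, 0.0) * w[j]
                                      for j in w if j != i))
                 for i in w}
        if all(abs(w_new[i] - w[i]) < tol for i in w):
            return w_new
        w = w_new
    return w
```

Each pass recomputes every matrix correction term from the latest composition estimate, exactly as in the procedure described in the text.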
If the matrix correction is done using algorithms that use more than one coefficient per interfering element [e.g., Claisse and Quintin, Eq. (99), or Rasberry and Heinrich, Eq. (104)], then the equations are not linear in the unknown concentrations and an algebraic solution is not possible. An iterative method, such as the one described earlier, can be used.

8. Algorithms with Empirical Coefficients

Empirical coefficients are coefficients that are not calculated from theory but from actually measured specimens using regression analysis (Anderson et al., 1974). They were the basis of the earliest correction methods, but they are now largely superseded by more theoretical ones. Such empirically determined coefficients tend to mix the matrix correction with the sensitivity of the spectrometer. On the one hand, the matrix effect is determined by the composition of the sample and physical parameters such as take-off and incidence angles and tube anode and voltage. These are the same for spectrometers of similar design. The sensitivity of the spectrometer, on the other hand, depends on the reflectivity of the crystals, the efficiency of the detectors, and so on. These parameters are unique for each spectrometer. Also, if one of the analyte lines is overlapped by another x-ray line, some of this effect can also affect the value of the influence coefficients. The coefficients thus determined are instrument-specific and are not transferable to other instruments.
Stephenson (1971) has noted that the regression equations involved in the determination of the coefficients in such an empirical way become unstable as the degree of correlation between the independent variables increases. This mandates careful planning of the experiment, including the composition of the synthetic standards. Klimasara (1994, 1995) has illustrated the use of standard spreadsheet programs for the calculation of the values of empirical influence coefficients and composition.

a. The Sherman Algorithm


Sherman (1953) was among the rst to propose an algorithm for correction of matrix
eects. For a ternary system, the algorithm can be represented by the following set of
equations:
aAA  tA WA aAB WB aAC WC 0
aBA WA aBB  tB WB aBC WC 0 135
aCA WA aCB WB aCC  tC WC 0

where aij represents the inuence coecient of element j on the analyte i and ti is the time
(in s) required to accumulate a preset number of counts. The constants aij are determined
from measurements on specimens with known composition. Determination of the com-
position of an unknown involves the solving of the above set of linear equations
[Eq. (135)]. This set, however, is homogeneous: Its constant terms are all equal to zero. So,
only ratios among the unknown Wi can be obtained. In order to obtain the weight frac-
tions Wi, an extra equation is required. Sherman proposed using the sum of all the weight
fractions of all the elements (or components) in the specimen, which ideally, should be
equal to unity. For a ternary specimen,

WA WB WC 1 136

Using Eq. (136), one of the equations in the set of Eqs. (135) can be eliminated. The
solution obtained, however, is not unique: for a ternary, any one of the three equations
can be eliminated. This yields three different combinations. Furthermore, any of the
three elements can be eliminated in each of the combinations. Hence, a total of 3 × 3 = 9
different sets can be derived from Eqs. (135) and (136), and each of these sets will
generate different results. In general, the algorithm yields n² different results for a system
with n elements or compounds. This is clearly undesirable, because it is hard to
determine which set will give the most accurate results. Another disadvantage is the fact
that the sum of the elements determined always equals unity, even if the most abundant
element has been neglected. Furthermore, the numerical values of the coefficients
depend, among other parameters such as geometry and excitation conditions, also on the
number of counts accumulated. Nonquantifiable parameters, such as the reflectivity of the
diffracting crystal used in wavelength-dispersive spectrometers, or tube contamination,
will also affect the values of the coefficients. Coefficients determined on a given
spectrometer cannot be used with another instrument; they are not transferable.
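A minimal numerical sketch (Python, with purely hypothetical coefficients aij and times ti) of one of these nine possible sets: the third homogeneous equation is replaced by the sum constraint of Eq. (136), and the resulting 3 × 3 linear system is solved:

```python
# Sketch of Sherman's scheme for a ternary specimen. The coefficient
# values below are hypothetical; they were constructed so that the
# system has the physical solution W = (0.5, 0.3, 0.2).

def solve3(m, v):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    a = [row[:] + [rhs] for row, rhs in zip(m, v)]
    n = 3
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][c] * x[c]
                              for c in range(r + 1, n))) / a[r][r]
    return x

m = [[-0.42, 0.5, 0.3],   # first homogeneous equation of Eq. (135)
     [0.4, -0.8, 0.2],    # second homogeneous equation
     [1.0, 1.0, 1.0]]     # sum constraint, Eq. (136), replaces the third
v = [0.0, 0.0, 1.0]
w = solve3(m, v)
print([round(x, 6) for x in w])  # recovers [0.5, 0.3, 0.2] here
```

Eliminating a different equation, or a different element, would in general yield a (slightly) different set of weight fractions, which is the ambiguity discussed above.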
The other algorithms discussed use some form of a ratio method: the Lachance and
Traill algorithm, for example, uses relative intensities. The measurements are then done
relative to a monitor; this reduces, or eliminates, the effect of such nonquantifiable
parameters.
Copyright 2002 Marcel Dekker, Inc.
b. The Algorithm of Lucas-Tooth and Price
Lucas-Tooth and Price (1961) developed a correction algorithm in which the matrix effect
is corrected using the intensities (rather than the concentrations) of the interfering elements.
The equation can be written as

Wi = Bi + Ii [k0 + Σj≠i kij Ij]          (137)

where Bi is a background term and k0 and kij are the correction coefficients. A total of
n + 1 coefficients have to be determined, requiring at least n + 1 standards. Usually,
however, a much larger number of standards is used. The coefficients are then determined
by, for example, a least-squares method. The corrections for the effect of the matrix on the
analyte are done via the intensities of the interfering elements; their concentrations are not
required. The method assumes that the calibration curves of the interfering elements
themselves are all linear; the correction is done using intensities rather than
concentrations. The algorithm will, therefore, have a limited range. Its use is limited to
applications where only one or two elements are to be analyzed (it still involves
measurement of all interfering element intensities) and where a computer of limited
capability is used (although calculation of the coefficients requires considerably more
computing power than the subsequent routine analysis of unknowns).
The advantages of the method are as follows:
The method is very fast, because the calculation of the composition of the unknowns
requires no iteration.
Analysis of only one element is possible; this requires, however, the determination of
all relevant correction factors.
The algorithm is very simple, requiring very little calculation.
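Because no iteration is needed, Eq. (137) reduces to a single expression per analyte. A minimal sketch in Python, in which all intensities and coefficient values are hypothetical:

```python
# Direct evaluation of the Lucas-Tooth and Price correction, Eq. (137):
# W_i = B_i + I_i * (k0 + sum over j != i of k_ij * I_j).
# All numerical values below are hypothetical, for illustration only.

def lucas_tooth_price(i, intensities, b_i, k0, k_ij):
    """Return the weight fraction of analyte i from measured intensities."""
    correction = k0 + sum(k * intensities[j]
                          for j, k in k_ij.items() if j != i)
    return b_i + intensities[i] * correction

intensities = {"Fe": 120.0, "Cr": 35.0, "Ni": 18.0}  # counts/s (hypothetical)
w_fe = lucas_tooth_price("Fe", intensities,
                         b_i=0.01, k0=5.0e-3,
                         k_ij={"Cr": 1.0e-5, "Ni": -2.0e-5})
print(round(w_fe, 4))
```

Note that only the analyte's own coefficients are needed, but the intensities of all interfering elements must still be measured, as stated above.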
c. Algorithms Based on Concentrations
Algorithms similar to Eq. (137) have been proposed, using corrections based on
concentrations rather than intensities. The values of the coefficients were then to be derived
from multiple-regression analysis on a large suite of standards. The main aim was to
obtain correction factors that could be determined on one spectrometer and used, without
alteration, on another instrument. In practice, the coefficients still have to be adjusted
because of the intimate and inseparable entanglement of spectrometer-dependent factors
with matrix effects. Furthermore, compared to the algorithms based on intensities, some of
the advantages of the latter are not retained: a calibration for all elements present is now
required, calculation of the composition of unknowns requires iteration, and so forth.
In principle, methods based on theoretically calculated influence coefficients are
recommended.
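The need for iteration can be seen in a small sketch (Python; the Lachance and Traill form Wi = Ri(1 + Σj≠i aij Wj) is used here as a stand-in, and all numerical values are hypothetical). The unknown weight fractions appear on both sides of the equation, so one starts from the relative intensities and substitutes repeatedly until the values converge:

```python
# Hypothetical sketch: with concentration-based coefficients, the weight
# fractions W appear on both sides of the correction equation, so a
# fixed-point iteration is needed (here on W_i = R_i * (1 + sum a_ij W_j)).

def iterate_concentrations(rel_int, a, tol=1e-9, max_iter=100):
    w = dict(rel_int)                       # start from relative intensities
    for _ in range(max_iter):
        w_new = {i: rel_int[i] * (1.0 + sum(a[i][j] * w[j]
                                            for j in w if j != i))
                 for i in w}
        if all(abs(w_new[i] - w[i]) < tol for i in w):
            return w_new
        w = w_new
    return w

rel_int = {"A": 0.40, "B": 0.35}            # hypothetical R_i
a = {"A": {"B": 0.6}, "B": {"A": -0.3}}     # hypothetical a_ij
w = iterate_concentrations(rel_int, a)
print({i: round(x, 4) for i, x in w.items()})
```

The iteration converges quickly here because the matrix correction terms are small; an intensity-based algorithm such as Eq. (137) avoids this loop entirely.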

VI. CONCLUSION

Among the advantages of XRF analysis are the facts that the method is nondestructive
and allows direct analysis involving little or no specimen preparation. Analysis of major
and minor constituents requires correction for matrix effects of variable (from one
specimen to another) magnitude. If the matrix varies appreciably from one specimen to the
next, then even the intensity of elements present at a trace level can be subject to matrix
effects and a correction is required. Several methods for matrix correction have been
described. Each of these methods has its own advantages and disadvantages. These, by
themselves, do not generally lead to the selection of the best method. The choice of the
method to use is also determined by the particular application.
From the previous sections, it may appear that the mathematical methods are more
powerful than the compensation methods. Yet, if only one or two elements at a trace level
in liquids have to be determined, compensation methods (either standard addition or the
use of an internal standard) can turn out to be better suited than, for example, rigorous
fundamental parameter calculations. Compensation methods will correct for the effect of
an unknown, but constant, matrix. Also, they do not require the analysis of all
constituents in the specimen. The mathematical methods (fundamental parameters as well
as methods based on theoretical influence coefficients), on the other hand, can handle cases
in which the matrix effect is more variable from one specimen to another. In this respect,
they appear to be more flexible than the compensation methods, but they do require more
knowledge of the complete matrix. All elements contributing significantly to the matrix
effect must be quantified (either by x-ray measurement or by another technique) even if the
determination of their concentrations is not required by the person who submits the
sample to the analyst. Once a particular algorithm is selected, it is customary to use it for
all analytes. However, it must be stressed that this is not a requirement. There is only one
requirement for adequate matrix correction: each analyte should be corrected adequately,
by whatever method.
If complete analysis (covering all major elements) is required, the analyst has the
choice between the fundamental parameter method and algorithms based on influence
coefficients. Commonly, fundamental parameter methods are (or were) used in research
environments rather than for routine analysis in industry. This choice is more often made
on considerations such as the availability of the programs and computers than on
differences in analytical capabilities. Influence coefficient algorithms tend to be used in
combination with more standards compared to fundamental parameter methods, because
their structure and simple mathematical representation facilitate interpretation of the
data (establishing a relationship between concentration and intensity, corrected for matrix
effects). The final choice, however, has to be made by the analyst.

REFERENCES

Anderson CH, Mander JE, Leitner JW. Adv X-Ray Anal 17:214, 1974.
Australian Standard 2563-1982, Wavelength Dispersive X-ray Fluorescence Spectrometers -
Methods of Test for Determination of Precision. North Sydney, NSW: Standards Association
of Australia, 1982.
Bambynek W, Crasemann B, Fink RW, Freund HU, Mark H, Swift CD, Price RE, Venugopala
Rao P. Rev Mod Phys 44:716, 1972.
Bearden JA. Rev Mod Phys 39:78, 1967.
Beattie HJ, Brissey RM. Anal Chem 26:980, 1954.
Bonetto RD, Riveros JA. X-Ray Spectrom 14:2, 1985.
Claisse F, Quintin M. Can Spectrosc 12:129, 1967.
Criss JW. Adv X-Ray Anal 23:93, 1980a.
Criss JW. Adv X-Ray Anal 23:111, 1980b.
Criss JW, Birks LS. Anal Chem 40:1080, 1968.
Criss JW, Birks LS, Gilfrich JV. Anal Chem 50:33, 1978.
de Boer DKG. Spectrochim Acta 44B:1171, 1989.
de Jongh WK. X-Ray Spectrom 2:151, 1973.

de Jongh WK. X-Ray Spectrom 8:52, 1979.
DeGroot PB. Adv X-Ray Anal 33:53, 1990.
Draper NR, Smith H. Applied Regression Analysis. New York: Wiley, 1966.
Feather CE, Willis JP. X-Ray Spectrom 5:41, 1976.
Garbauskas MF, Goehner RP. Adv X-Ray Anal 26:345, 1983.
Gillam E, Heal HT. Br J Appl Phys 3:353, 1952.
Heinrich KFJ. In: McKinley TD, Heinrich KFJ, Wittry DB, eds. The Electron Microprobe.
New York: Wiley, 1966, p 296.
Holynska B, Markowicz A. X-Ray Spectrom 10:61, 1981.
Hower J. Am Mineral 44:19, 1959.
Hughes H, Hurley P. Analyst 112:1445, 1987.
Hunter CB, Rhodes JR. X-Ray Spectrom 1:107, 1972.
Ingham MN, Vrebos BAR. Adv X-Ray Anal 37:717, 1994.
ISO, Determination of Nickel and Vanadium in Liquid Fuels - Wavelength-Dispersive X-Ray
Fluorescence Method, ISO 14597. Geneva: ISO, 1995.
Johnson W. International Report BISRA MG/D/Conf Proc/610/67, 1967.
Kaufmann M, Mantler M, Weber F. Adv X-Ray Anal 37:205, 1994.
Klimasara AJ. Adv X-Ray Anal 37:647, 1994.
Klimasara AJ. Workshop on the Use of Spread Sheets in XRF Analysis, 44th Annual Denver X-Ray
Conference, Colorado Springs, CO, 1995.
Lachance GR. X-Ray Spectrom 8:190, 1979.
Lachance GR. International Conference on Industrial Inorganic Elemental Analysis, Metz, France,
1981.
Lachance GR. Adv X-Ray Anal 31:471, 1988.
Lachance GR, Claisse F. Adv X-Ray Anal 23:87, 1980.
Lachance GR, Traill RJ. Can Spectrosc 11:43, 1966.
Leroux J, Thinh TP. Revised Tables of Mass Attenuation Coefficients. Quebec: Corporation
Scientifique Claisse, 1977.
Li-Xing Z. X-Ray Spectrom 13:52, 1984.
Lubecki A, Holynska B, Wasilewska M. Spectrochim Acta 23B:465, 1968.
Lucas-Tooth HJ, Price BJ. Metallurgia 64:149, 1961.
Mainardi RT, Fernandez JE, Nores M. X-Ray Spectrom 11:70, 1982.
Mantler M. Adv X-Ray Anal 36:27, 1993.
McMaster WH, Delgrande NK, Mallet JH, Hubbel JH. Compilation of X-Ray Cross Sections,
UCRL 50174, Sec II, Rev 1, 1969.
Muller RO. Spectrochemical Analysis by X-Ray Fluorescence. New York: Plenum Press, 1972,
chap 9.
Pella PA, Feng LY, Small JA. X-Ray Spectrom 14:125, 1985.
Pella PA, Tao GY, Lachance GR. X-Ray Spectrom 15:251, 1986.
Pollai G, Ebel H. Spectrochim Acta 26B:761, 1971.
Pollai G, Mantler M, Ebel H. Spectrochim Acta 26B:733, 1971.
Rasberry SD, Heinrich KFJ. Anal Chem 46:81, 1974.
Rousseau RM. X-Ray Spectrom 13:121, 1984a.
Rousseau RM. X-Ray Spectrom 13:3, 1984b.
Rousseau RM, Claisse F. X-Ray Spectrom 3:31, 1974.
Sherman J. The Correlation Between Fluorescent X-Ray Intensity and Chemical Composition.
ASTM Special Publication No 157. Philadelphia: ASTM, 1953, p 27.
Sherman J. Spectrochim Acta 7:283, 1955.
Shiraiwa T, Fujino N. Jpn J Appl Phys 5:886, 1966.
Shiraiwa T, Fujino N. Bull Chem Soc Japan 40:2289, 1967.
Shiraiwa T, Fujino N. Adv X-Ray Anal 11:63, 1968.
Shiraiwa T, Fujino N. X-Ray Spectrom 3:64, 1974.
Sparks CJ. Adv X-Ray Anal 19:19, 1976.

Stephenson DA. Anal Chem 43:310, 1971.
Tao GY, Pella PA, Rousseau RM. NBSGSC, a FORTRAN program for quantitative X-Ray
Fluorescence Analysis. NBS Technical Note 1213. Gaithersburg, MD: National Bureau of
Standards, 1985.
Tertian R. Adv X-Ray Anal 19:85, 1976.
Tertian R. X-Ray Spectrom 16:261, 1987.
Tertian R, Vie le Sage R. X-Ray Spectrom 6:123, 1977.
Vrebos BAR, Helsen JA. X-Ray Spectrom 15:167, 1986.
Vrebos BAR, Pella PA. X-Ray Spectrom 17:3, 1988.
Veigele WJ. In: Robinson JW, ed. Handbook of Spectroscopy. Cleveland, OH: CRC, 1974, p 28.
Venugopala Rao P, Chen MH, Crasemann B. Phys Rev A 5:997, 1972.
Wadleigh KR. X-Ray Spectrom 16:41, 1987.
Wood PR, Urch DS. J Phys F: Metal Phys 8:543, 1978.

SUGGESTIONS FOR FURTHER READING

Bertin EP. Principles and Practice of X-Ray Spectrometric Analysis. 2nd ed. New York: Plenum
Press, 1975.
Jenkins R, Gould RW, Gedcke D. Quantitative X-Ray Spectrometry. 2nd ed. New York: Marcel
Dekker, 1995.
Tertian R, Claisse F. Principles of Quantitative X-Ray Fluorescence Analysis. New York: Wiley,
1982.
