Johan L. de Vries*
Eindhoven, The Netherlands
Bruno A. R. Vrebos
Philips Analytical, Almelo, The Netherlands
*Retired.

I. INTRODUCTION
depth t below the surface, the remaining fraction of the intensity I_t(λ0) is given by the Lambert–Beer law:

$$I_t(\lambda_0) = I_0(\lambda_0)\,\exp[-\mu_s(\lambda_0)\,\rho_s\,t\,\csc\psi'] \qquad (2)$$
μs(λ0) is the mass-attenuation coefficient of the specimen for photons with wavelength λ0 (the subscript s refers to the specimen) and ρs is the density of the specimen. Note that, due to the angle of incidence, the path length traveled is given by t csc ψ′. The mass-attenuation coefficient μs(λ0) in this equation is thus calculated for the specimen; this is done by adding the mass-attenuation coefficients of all elements j present in the specimen, each multiplied by its mass fraction W_j:
$$\mu_s(\lambda_0) = \sum_{j=1}^{n} \mu_j(\lambda_0)\,W_j \qquad (3)$$
where n is the total number of elements present in the specimen. The fraction of the incident beam absorbed by the analyte i in the layer between t and (t + dt) is given by

$$W_i\,\mu_i(\lambda_0)\,\rho_s\,\csc\psi'\,dt \qquad (4)$$
It is assumed here that the composition of the specimen is uniform throughout. In other words, W_i and W_j are independent of the position of the layer (t, t + dt) within the specimen.
Only a fraction of the photons absorbed creates vacancies in the K shell; this fraction is given by (r_iK − 1)/r_iK, where r_iK is the absorption jump ratio of the K shell of element i. The fraction of K vacancies emitting x-rays is given by the fluorescence yield ω_iK. The fraction of Kα photons in the total of x-rays emitted for the analyte is given by the transition probability f_iKα. These factors can be combined in a factor designated Q_i(λ0, λ_i):

$$Q_i(\lambda_0,\lambda_i) = \frac{r_{iK}-1}{r_{iK}}\,\omega_{iK}\,f_{iK\alpha} \qquad (5)$$
In the above derivation, it is assumed that the characteristic line of interest is a Ka. If
another line is considered, the relevant changes need to be made.
The characteristic photons thus generated are emitted isotropically; only a fraction is emitted toward the detector. If Ω is the solid angle subtended by the collimator–detector system, expressed in steradians, then this fraction is given by Ω/4π. The angle Ω should be small enough that the beam can be considered a parallel beam leaving the specimen at a single well-defined angle ψ″ with the surface. The fraction of characteristic photons with wavelength λ_i not absorbed between the layer (t, t + dt) and the surface is given by
Copyright 2002 Marcel Dekker, Inc.
$$\exp[-\mu_s(\lambda_i)\,\rho_s\,t\,\csc\psi''] \qquad (6)$$
All photons reaching the surface and propagating in the direction indicated by the angle ψ″ are assumed to be detected. If the detector has absorbing elements (such as windows) or detection efficiencies different from unity, then these can also be taken into account. Also, the attenuation by the medium (e.g., air) between specimen and detector can be calculated using similar expressions.
The intensity of element i, as excited by the incident beam with wavelength λ0, is labeled P_i(λ0) (explicitly denoting the primary fluorescence effect) and is given by

$$P_i(\lambda_0) = I_0(\lambda_0)\,\exp[-\mu_s(\lambda_0)\,\rho_s\,t\,\csc\psi']\;\mu_i(\lambda_0)\,W_i\,Q_i(\lambda_0,\lambda_i)\,\rho_s\,\csc\psi'\,dt\;\exp[-\mu_s(\lambda_i)\,\rho_s\,t\,\csc\psi'']\;\frac{\Omega}{4\pi} \qquad (7)$$
Combining factors leads to

$$P_i(\lambda_0) = I_0(\lambda_0)\,\mu_i(\lambda_0)\,W_i\,Q_i(\lambda_0,\lambda_i)\,\csc\psi'\,\frac{\Omega}{4\pi}\,\exp[-(\mu_s(\lambda_0)\csc\psi' + \mu_s(\lambda_i)\csc\psi'')\,\rho_s\,t]\;\rho_s\,dt \qquad (8)$$
The contributions of all layers between the surface and the bottom of the specimen have to be summed. This can be done by integrating the above expression over dt, from 0 (the surface) to the bottom. In practice, for x-rays, the thickness of bulk specimens can be considered to be infinite, so the upper limit is ∞. Note that t is always used in combination with ρs, so the integration will be done over ρs t first. Taking all constant factors outside the integral, one obtains
$$P_i(\lambda_0) = I_0(\lambda_0)\,\mu_i(\lambda_0)\,W_i\,\csc\psi'\,Q_i(\lambda_0,\lambda_i)\,\frac{\Omega}{4\pi}\int_0^{\infty}\exp[-(\mu_s(\lambda_0)\csc\psi' + \mu_s(\lambda_i)\csc\psi'')\,\rho_s\,t]\,\rho_s\,dt \qquad (9)$$
Taking a = μs(λ0) csc ψ′ + μs(λi) csc ψ″ and noting that ρs dt = d(ρs t), the following expression is obtained for the primary fluorescence of the analyte i in the specimen s:

$$P_i(\lambda_0) = \frac{I_0(\lambda_0)\,\mu_i(\lambda_0)\,W_i\,\csc\psi'\,Q_i(\lambda_0,\lambda_i)\,\Omega/4\pi}{\mu_s(\lambda_0)\csc\psi' + \mu_s(\lambda_i)\csc\psi''} \qquad (11)$$
Often, the element-specific factors, given by Eq. (5), are combined with the instrument-specific factor Ω/4π. This leads to a simpler expression for the primary fluorescence:

$$P_i(\lambda_0) = K_i\,I_0(\lambda_0)\,W_i\,\frac{\mu_i(\lambda_0)}{\mu_s(\lambda_0) + G\,\mu_s(\lambda_i)} \qquad (12)$$

where K_i = (Ω/4π) Q_i(λ0, λ_i) and G = csc ψ″/csc ψ′ = sin ψ′/sin ψ″.
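As a numerical illustration, Eq. (12) can be evaluated directly. The sketch below uses hypothetical mass-attenuation coefficients, weight fraction, and angles; K_i bundles Ω/4π and Q_i and is left as a scale factor, so only the relative behavior is meaningful.

```python
import math

def primary_fluorescence(I0, W_i, mu_i_l0, mu_s_l0, mu_s_li,
                         psi_in_deg, psi_out_deg, K_i=1.0):
    """P_i = K_i * I0 * W_i * mu_i(l0) / (mu_s(l0) + G * mu_s(li)), Eq. (12),
    with G = sin(psi') / sin(psi'')."""
    G = math.sin(math.radians(psi_in_deg)) / math.sin(math.radians(psi_out_deg))
    return K_i * I0 * W_i * mu_i_l0 / (mu_s_l0 + G * mu_s_li)
```

Note that the matrix enters only through the denominator, which is why the relation between P_i and W_i is, in general, nonlinear.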
In the above derivation, the following assumptions have been made:
(a) The specimen is completely homogeneous.
(b) The specimen extends to infinity in three dimensions.
(c) The primary rays are not scattered on their way to the layer dt.
(d) No enhancement effects occur.
(e) The characteristic radiation is not scattered on its way to the specimen surface.
The simplest case is excitation of Kα (or Kβ) photons. For the characteristic photons associated with the L lines, other effects, such as Coster–Kronig transitions and so forth, have to be taken into account when describing the fraction of absorbed primary photons that give rise to characteristic photons. For a more detailed discussion of these effects, refer to Chapter 1.
There are two criteria to be satisfied for a characteristic photon λ_j to be able to cause secondary fluorescence:
1. It must be excited by the incident photon λ0 (this means that the energy of the incident photon must be higher than the energy of the absorption edge associated with λ_j).
2. The energy of the photon λ_j must be higher than the energy of the absorption edge for the analyte.
The derivation of the intensities of characteristic x-rays as a function of specimen composition was first done by Sherman (1955). The resulting equations, however, were rather unwieldy. Equations (9) and (13) were published later by Shiraiwa and Fujino (1966) and Sparks (1976).
In practice, many computer programs only consider a few lines for enhancement, and lines that have a low transition probability f are usually neglected. Data regarding these transition probabilities for a line within a series can be found, for example, in Appendix II of Chapter 1. The enhancement phenomenon will be more pronounced if the x-rays of the enhancing elements are only slightly more energetic than the energy of the absorption edge of the element i. The enhancement may contribute up to 40–50% of the total fluorescent radiation I_i, especially where the concentration of the enhancing elements is much greater than the concentration of the analyte. This applies even more for the light elements, where the primary spectrum may not be very effective, as the most intense wavelengths are far away from the absorption edges of these light elements.
The effect of scattered radiation is generally ignored, although its contribution can also be calculated (Pollai et al., 1971).
If the characteristic line of the analyte considered is one of its L lines, then it is possible that other lines of the analyte itself enhance the characteristic line considered. The energy of the K edge of La (atomic number Z = 57) is 38.9 keV (see Chapter 1), so the K lines of all elements with Z < 57 are excited if, for example, an x-ray tube is used at
Table 1  Data for Selected Absorption Edges and Characteristic Lines of Pb (energies in keV)

Edge    Edge energy    Line (transition)    Line energy
K       88.04
L1      15.86          Lα1 (L3–M5)          10.55
L2      15.20          Lβ1 (L2–M4)          12.61
L3      13.04          Lγ1 (L2–N4)          14.76

Source: Appendix I and Appendix II of Chapter 1 (this volume).
Also, tertiary fluorescence is possible, where the incident photon excites element k (primary fluorescence), whose radiation excites element j (secondary fluorescence), whose radiation, in turn, excites element i, causing tertiary fluorescence. This contribution is generally lower than 3% of the total fluorescence and is commonly ignored, as shown by Shiraiwa and Fujino (1967, 1974).
3. Excitation by Continuous Spectra
If the incident beam is polychromatic rather than monochromatic, Eq. (15) needs to be calculated for each wavelength. Wavelengths longer than the wavelength of the absorption edge λ_edge,i of the analyte cannot excite fluorescence, so these need not be considered. If J(λ) is the function representing the tube spectrum, then Eqs. (11) and (13) can still be used, provided that I_0(λ0) is replaced by J(λ), representing the intensity of the incident spectrum at wavelength λ:

$$I_i = \int_{\lambda_{\min}}^{\lambda_{\mathrm{edge},i}} J(\lambda)\left[P_i(\lambda) + \sum_j S_{ij}(\lambda,\lambda_j)\right] d\lambda \qquad (16)$$
This equation allows the calculation of the intensity of characteristic radiation of a given analyte in an infinitely thick specimen. Equations such as Eqs. (11), (14), and (16) are often referred to as fundamental parameter equations because they allow the calculation of the intensity of fluorescent radiation as a function of the composition of the specimen (weight fractions W_i), the incident spectrum [J(λ)], and the configuration of the spectrometer used (ψ′, ψ″, and Ω). All other variables used are fundamental constants, such as the mass-attenuation coefficients for a given element at a given wavelength or its fluorescence yield and so forth.
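In practice, the fundamental parameter calculation of Eq. (16) amounts to a numerical integration of the tube spectrum times the per-wavelength response. The sketch below assumes hypothetical callables for J(λ) and for the response (the primary term plus any enhancement terms) and uses simple trapezoidal integration.

```python
def fluorescent_intensity(J, response, lam_min, lam_edge, steps=1000):
    """Trapezoidal integration of J(lam) * response(lam) over
    [lam_min, lam_edge], a numerical form of Eq. (16)."""
    h = (lam_edge - lam_min) / steps
    total = 0.0
    for k in range(steps + 1):
        lam = lam_min + k * h
        w = 0.5 if k in (0, steps) else 1.0  # trapezoid end-point weights
        total += w * J(lam) * response(lam)
    return total * h
```

A production fundamental parameter program would evaluate the response from tabulated attenuation coefficients at each wavelength; here it is a placeholder.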
C. Some Observations
The integration over t in Eq. (9) is taken from zero to infinity. It is obvious that the first layers contribute more to the intensity of λ_i than the more inward layers. Theoretically, even at large values of t, a very minor contribution to the intensity is still to be expected. Often, the (minimum) infinite depth is defined arbitrarily as that thickness t where the contribution of the layer (t, t + dt) is 0.01% of that of the surface layer. In this case, it is defined relative to the surface layer. Alternatively, it is defined as the thickness where the contribution to the total intensity is less than 1%. In this case, it is relative to the total intensity from a truly infinitely thick specimen. The value of the infinite thickness depends on the value of the absorption coefficients and the density of the specimen. In practice, it may vary from a few micrometers for heavy matrices and long wavelengths to centimeters for short wavelengths and light matrices, as in solutions.
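The first of these definitions follows directly from the exponential in Eq. (8): the layer at depth t contributes exp[−(μs(λ0) csc ψ′ + μs(λi) csc ψ″) ρs t] relative to the surface layer, so the depth at which this ratio falls to 10⁻⁴ can be solved for in closed form. A sketch with hypothetical attenuation coefficients (cm²/g) and density (g/cm³):

```python
import math

def infinite_thickness(mu_s_l0, mu_s_li, psi_in_deg, psi_out_deg,
                       density, fraction=1e-4):
    """Depth (cm) at which a layer's contribution has fallen to `fraction`
    of the surface layer's, from the exponential in Eq. (8)."""
    a = (mu_s_l0 / math.sin(math.radians(psi_in_deg))
         + mu_s_li / math.sin(math.radians(psi_out_deg)))  # cm^2/g
    return -math.log(fraction) / (a * density)
```

For heavy matrices (large μ, large ρ) this evaluates to micrometers; for light matrices and short wavelengths it can reach centimeters, consistent with the range quoted above.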
For a given element and a fixed geometry, an efficiency factor C(λ0, λ_i) can be introduced:

$$C(\lambda_0,\lambda_i) = \frac{\mu_i(\lambda_0)}{\sum_{j=1}^{n} W_j\,\mu_j(\lambda_0) + G\sum_{j=1}^{n} W_j\,\mu_j(\lambda_i)} \qquad (17)$$

It is obvious that the terms Σ W_j μ_j are the origin of nonlinear calibration lines, as variations in W_j influence the value of the denominator. For a pure metal, this reduces to

$$C(\lambda_0,\lambda_i) = \frac{\mu_i(\lambda_0)}{\mu_i(\lambda_0) + G\,\mu_i(\lambda_i)} \qquad (18)$$
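Equation (17) can be coded compactly when the specimen is given as a mapping of elements to weight fractions; the attenuation coefficients below are hypothetical inputs, as in the sketches above.

```python
def efficiency_factor(W, mu_l0, mu_li, i, G):
    """C(l0, li) per Eq. (17): mu_i(l0) divided by the weighted sums of the
    specimen attenuation at the incident and fluorescent wavelengths.
    W, mu_l0, mu_li are dicts keyed by element symbol; i is the analyte."""
    denom = (sum(W[j] * mu_l0[j] for j in W)
             + G * sum(W[j] * mu_li[j] for j in W))
    return mu_l0[i] / denom
```

For a single-element specimen (W = {i: 1}) this reduces to Eq. (18), as expected.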
In the general case, the wavelengths in the primary spectrum close to the absorption edge are the most effective in exciting the analyte i. The efficiency factor C(λ0, λ_i) is thus a combination of the absorption curve of analyte i as a function of λ and the spectral distribution. In a first approximation, an effective wavelength λ_e can be introduced, which has the same effect of excitation of element i as the total primary spectrum. The exact value of this λ_e will be influenced by the characteristic tube lines if they are active in exciting i. Otherwise, λ_e can, in general, be assumed to have a value of approximately two-thirds of the absorption edge λ_edge. Its actual value is, however, dependent on the chemical composition of the specimen. For instance, for Fe, the wavelength of the K edge λ_edge is 0.174 nm (7.11 keV) and an effective wavelength λ_e of 0.116 nm is obtained using this rule of thumb. In the ZnO–Fe2O3 system, λ_e was found to vary from 0.130 nm for 100% Fe2O3 to 0.119 nm for 10% Fe2O3 in ZnO. The estimated value of 0.116 nm is in good agreement with the experimental one for Fe2O3.
Another interesting case is the analysis of a heavy element i in a light matrix. In the summation in the denominator of Eq. (12), the terms W_i μ_i(λ0) and W_i μ_i(λ_i) are then the most important. The other terms can, in a first approximation, be neglected if W_i is not small. However, that means that the terms W_i in the numerator and the denominator cancel and the measured intensity becomes independent of W_i; thus, the analysis becomes impossible in this extreme case. A solution to this problem is found by making the influence of the terms W_i μ_i(λ) in the denominator less dominant by adding a large term W_a μ_a(λ). This can be done by making W_a large (e.g., by diluting) or by making μ_a(λ) large, by adding a heavy absorber. Equation (17) enables one to calculate beforehand how large this term should be to eliminate fluctuations in the concentration of the other elements.
In the derivation of the fluorescence of the analyte i in the preceding paragraphs, the following simplifications were made:
1. First, it was assumed that the primary rays follow a linear path to the layer dt at depth t. However, the primary rays may also be scattered. In general, the loss in intensity of the primary beam of photons due to scattering may be neglected. These scattering effects become more important when the primary x-rays are more energetic and the average atomic number of the matrix decreases. This scatter may give a higher background in the secondary spectrum, thus leading to a poorer precision of the analysis. On the other hand, the excitation efficiency may be enhanced, as the primary rays dwell longer in the active layers, thus having a higher probability to encounter atoms of element i. This effect may overrule the increase in intensity of the background radiation. A case in point is the determination of Sn in oils, which gives better results using the Sn K lines at a high x-ray tube voltage than using the Sn L lines at moderate voltages.
Incidentally, this scattering of the primary radiation makes it possible to check the voltage over the x-ray tube. According to Bragg's law, the intensity of the primary spectrum is zero at an angle θ0, given by

$$\theta_0 = \sin^{-1}\frac{n\lambda_{\min}}{2d_{\mathrm{crystal}}} \qquad (19)$$

where 2d_crystal is the 2d spacing of the crystal used and n is an integer. λ_min (in nm) is given by

$$\lambda_{\min} = \frac{1.24}{V} \qquad (20)$$

where V is the voltage on the x-ray tube, expressed in kilovolts. In practice, a lower value for V will be found when Compton scattering dominates over Rayleigh scattering; the λ_min found is then too large by the Compton shift, which is about 0.0024 nm for most spectrometers; the actual value depends on the incidence and exit angles (see also Sec. V.C.1).
2. The integral in Eq. (9) was taken from zero to infinity; further, it was assumed that the specimen is completely homogeneous. This, of course, is never realized in practice, as we are dealing with discrete atoms in chemical compounds. In powders, the different compounds may have a tendency to cluster. The particles will, in general, have different sizes and shapes. Putting the sample into solution, either aqueous or solid (a melt), may overcome this problem.
It was stated earlier that infinite thickness may vary from 20 µm to a few centimeters. However, the most effective layers are much thinner. Thus, the number of discrete particles actually contributing to the fluorescent radiation may be rather small.
B. Random Errors
1. Counting Statistics
If an x-ray measurement consisting of the determination of a number of counts N is repeated n times, the results N1, N2, N3, . . . , Nn will spread about the true value N0. If n is large, the distribution of the measurements will follow a Gaussian distribution,

$$W(N) = \frac{1}{\sqrt{2\pi N_0}}\,\exp\left[-\frac{(N-N_0)^2}{2N_0}\right] \qquad (24)$$

provided N is also large. The standard deviation σ of the distribution is equal to √N_mean, again if n and N are large, where N_mean is the mean of the n determinations. From the properties of the Gaussian distribution, the following hold:
68.3% of all values N will be between N0 − σ and N0 + σ.
95.4% of all values N will be between N0 − 2σ and N0 + 2σ.
99.7% of all values N will be between N0 − 3σ and N0 + 3σ.
Similarly, there is a certain probability that the true result N0 will lie between N − √N and N + √N, assuming the same distribution for N and N0. Measurement results are commonly expressed as a count rate (intensity per unit time) instead of the number of counts collected in the counting interval. This allows an easier comparison of measurements made with different counting times, but the measuring time needs to be specified in order to be able to assess the counting statistical error.
The determined concentration is dependent on the net count rate, which is the peak count rate R_p minus the background count rate R_b. The total measuring time T equals t_p + t_b, where t_p and t_b are the measurement times for peak and background, respectively. In modern equipment, there is no significant statistical error in the measurement of t. We can thus assume that R follows the same Gaussian distribution as N, with the same relative standard deviation ε_N. ε_N is defined as

$$\varepsilon_N = \frac{\sigma_N}{N} \qquad (25)$$

Hence,

$$\varepsilon_N = \frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}} = \frac{1}{\sqrt{Rt}} = \varepsilon_R \qquad (26)$$

and

$$\sigma_R = \varepsilon_R\,R = \sqrt{\frac{R}{t}} \qquad (27)$$
It is obvious that the relative counting error decreases as t increases.
When a net count rate has to be determined, the peak, R_p, and the background, R_b, have to be measured; there are thus two independent variables. The standard deviation of the net intensity, σ_d, is given by

$$\sigma_d = \sqrt{\sigma_p^2 + \sigma_b^2} = \sqrt{\frac{R_p}{t_p} + \frac{R_b}{t_b}} \qquad (28)$$
2. Instrumental Errors
If the instrumental and counting uncertainties are random and independent variables, then

$$\varepsilon_{tot} = \sqrt{\varepsilon_{instr}^2 + \varepsilon_{count}^2} \qquad (40)$$

$$\varepsilon_{instr} = \sqrt{\varepsilon_{tot}^2 - \varepsilon_{count}^2} \qquad (41)$$

Although the counting error is influenced by the instrumental error, Eq. (41) is still a good approximation. σ_tot can be found from a series of repeated results of one measurement, and σ_count is known from counting statistics. To check the instrumental instability, all the functions should be measured separately. A radioactive source (⁵⁵Fe, for example) can be used to check the detector and electronic circuitry. The x-ray tube can be checked by repeating the measurements with specimen and goniometer in a fixed position, eliminating errors stemming from the mechanics of the spectrometer.
In another series of experiments, cycling between different angles checks the reproducibility of the goniometer, whereas repositioning the specimen between measurements checks the specimen holder, the reproducibility of the specimen-loading mechanics, and so forth. A comprehensive series of tests for wavelength-dispersive x-ray fluorescence spectrometers is described in the Australian Standard 2563-1982 (1982). Sometimes the error σ_tot found is smaller than expected, or even smaller than σ_count. This may indicate that an unexpected systematic error is involved, or there may be an uncorrected dead time in the equipment (i.e., the true counting rate is higher than measured, which means that the relative error is smaller).
3. Detection Limit
A characteristic line intensity decreases with decreasing concentration of the analyte and finally disappears in the background noise. The true background intensity may be constant, but the results of the measurements fluctuate around a mean value R_b,mean. To be significantly different from the background, a signal R_p must, although it is larger than R_b,mean, be distinguished from the spread in R_b. In other words, if we measure a signal R_p larger than R_b and we assume the analyte is present, what is the probability that our assumption is correct? If the results of the measurements are random and follow a Gaussian distribution, then this probability is determined by σ_Rb. If the measurement R_p is higher than R_b + 2σ_Rb, then the probability that our assumption is correct is approximately 95%; if a higher certainty is required (e.g., 99.7%), then R_p should be larger than R_b + 3σ_Rb. Thus, the net intensity is 3σ_Rb and the detection limit, DL, would be

$$DL = \frac{3\sigma_{R_b}}{M} \qquad (42)$$

where M is the sensitivity in counts per second per percent. So, the detection limit in the above equation is the concentration corresponding to a net peak intensity of 3σ_Rb. However, in x-ray spectrometry, the background signal is specimen dependent and cannot
be measured independently, as in radioactivity measurements. Hence, R_b has to be measured in an off-peak location in the spectrum. The result R_b found in measuring time t s is assumed to be R_b,mean, and σ_Rb is assumed to be √(R_b/t). Thus, two measurements have to be made: R_p and R_b, each in time t s. The detection limit thus becomes

$$DL = \frac{3\sqrt{2}}{M}\sqrt{\frac{R_b}{T}} \qquad (43)$$
where T = 2t. If we are satisfied with a 95% probability that our assumption is correct, then

$$DL = \frac{2\sqrt{2}}{M}\sqrt{\frac{R_b}{T}} \qquad (44)$$

which is roughly equal to

$$DL = \frac{3}{M}\sqrt{\frac{R_b}{T}} \qquad (45)$$
It is obvious that the detection limit decreases if the counting time increases. However, the total error in R_b contains the instrumental error as well. Thus, there is no sense in increasing the counting time when the instrumental error dominates.
Ingham and Vrebos (1994) have shown that the detection limit can be improved by carefully selecting a primary filter. If the application of such a primary beam filter reduces the intensity of the (scattered) continuum from the tube more than it affects the sensitivity, the detection limit is improved. The loss in sensitivity M needs to be more than compensated for by the reduction in background intensity, as, from Eq. (45), the detection limit is proportional to the square root of R_b and inversely proportional to the sensitivity M.
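Equation (45) makes this trade-off easy to evaluate numerically; the sketch below takes the background count rate, the total counting time, and the sensitivity as inputs.

```python
import math

def detection_limit(Rb, T, M):
    """Approximate detection limit (in %) per Eq. (45), with Rb the background
    count rate (counts/s), T the total counting time (s), and M the
    sensitivity (counts/s per percent)."""
    return 3.0 / M * math.sqrt(Rb / T)
```

Halving Rb with a filter that also halves M leaves the detection limit worse by a factor of √2, which is the criterion discussed above in quantitative form.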
6. Particle Statistics
Only a limited volume of the specimen can actually contribute to the fluorescent radiation. As long as this active volume is the same in standards and actual specimens and the atomic distribution is completely homogeneous, this poses no problem. However, the atoms are bound into chemical compounds, forming finite particles with different chemical compositions. The analyte may only occur in particles with a certain chemical composition and not in other particles. Then, only these specific particles can contribute to the fluorescent radiation of the analyte i. Therefore, the count rate R_i measured depends on the number of those particles present in the active volume, where, evidently, the first layers contribute most of the fluorescent radiation.
Table 2 gives an indication of the penetration depth of radiation of various wavelengths into matrices with varying absorption power. It is evident that for most solid specimens, the fluorescent radiation originates within 20 µm or less from the surface. To get an idea of how many particles can actually contribute to the fluorescent intensity of analyte i, let us assume that the irradiated area of 10 cm² is covered with cubic particles of 100 µm dimension in a random fashion. Assuming a filling factor of 0.8 and assuming that the analyte i is only present in 10% of the particles, then 10 × 10⁸ × 10⁻⁴ × 10⁻¹ × 0.8 = 8000 particles could actually be contributing. Assuming a Gaussian distribution, this number would have a standard deviation of √8000 ≈ 90 particles, or a relative standard deviation of approximately 1.1%. If the concentration of analyte is only 1%, then this relative standard deviation would be roughly 3.3%. In practice, these errors might even be larger, as the irradiation of the specimen is not homogeneous because the primary spectrum originates in a rather small anode and passes through a large window and is, thus, conically shaped. Spinning the specimen in its own plane during the analysis will reduce this error. Furthermore, the first layers are the most effective, having
Table 2  Infinite Thickness (in µm) for Certain Analytical Lines as a Function of the Matrix

Analytical line    Fe base    Mg base    H2O solution    Borate    Borate + La2O3 10%
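The particle-count estimate worked through above can be sketched as a small routine; the area, particle size, filling factor, and analyte fraction are the hypothetical example values from the text.

```python
import math

def particle_rsd(area_cm2, particle_um, fill_factor, analyte_fraction):
    """Relative standard deviation (as a fraction) of the number of
    analyte-bearing particles covering the irradiated area, assuming a
    Gaussian spread of sqrt(n) on n particles."""
    area_um2 = area_cm2 * 1e8                 # 1 cm^2 = 1e8 um^2
    n = area_um2 / particle_um**2 * fill_factor * analyte_fraction
    return 1.0 / math.sqrt(n)
```

With the example values (10 cm², 100 µm particles, filling factor 0.8, analyte in 10% of the particles) this reproduces the roughly 1.1% relative standard deviation quoted above.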
C. Systematic Errors
1. Dead Time
After an x-ray photon is detected in the counter and accompanying electronics, it takes a certain time before the counting circuit is ready to accept the next photon. Any photon entering the counter within this period, called the dead time of the counter circuit, is simply not registered and is thus lost. This dead time is of the order of a few microseconds. The counting losses are thus dependent on the actual count rate. The measured count rate R_m is always lower than the true count rate R_T. Their relation can be approximated by the expression

$$R_T = \frac{R_m}{1 - \tau_d R_m} \qquad (47)$$

where τ_d is the dead time. For instance, if τ_d = 1 µs and R_m = 10⁵ counts per second, the dead-time loss is approximately 10%. With modern equipment, very high count rates can be handled to reach sufficient precision in a short time. It is therefore necessary to correct for these losses. In most wavelength-dispersive instruments, an automatic dead-time correction circuit is included. Energy-dispersive instruments, on the other hand, tend to collect counts for the specified live time; thus, the measurement takes longer because the total time required in this case consists of the specified measuring time (live time) and the dead time.
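Equation (47) in code, with the dead time in seconds and the rate in counts per second:

```python
def true_count_rate(Rm, dead_time_s):
    """True count rate R_T from the measured rate Rm (counts/s) and the
    counter dead time (s), per Eq. (47)."""
    return Rm / (1.0 - dead_time_s * Rm)
```

For the worked example in the text (τ_d = 1 µs, R_m = 10⁵ counts/s), the correction raises the rate by about 11%, i.e., roughly 10% of the true rate was lost.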
2. Matrix Effects
The fluorescent intensity of the analyte i is, as discussed earlier, not only dependent on its concentration but can also be strongly dependent on the composition of the specimen itself. The primary rays will be absorbed and scattered, and secondary fluorescence may occur. All of these effects depend on the chemical composition of the specimen. The importance of these effects depends on the concentration of these matrix elements and their influence (e.g., their absorption of primary and secondary x-rays). These matrix effects may introduce large systematic errors when they are not properly accounted for, as discussed in Sec. V.
3. Spectral Overlap
Two or more characteristic lines may not be completely separated from the analytical line. This separation may be improved by using a crystal with better dispersion (e.g., a lower d value). However, the choice must often be made between high intensity and high dispersion. If the disturbing line is due to a high-order crystal reflection, its influence may be strongly reduced by the proper setting of the pulse-height selector. However, in some cases, the escape peak of the interfering line may be within the pulse-height selector window. For instance, the third-order reflection, using a pentaerythritol (PE) crystal, of the characteristic tube lines of a Sc anode, scattered by the specimen, will slightly interfere with the analysis for Al in a light matrix, as the energy of the Sc Kα escape peak in an Ar-filled gas detector is very close to the energy of the Al Kα line.
Often, the overlap is due to a diagram line of an element of which another diagram line is free of overlap. As, in general, two diagram lines of one element have a constant intensity ratio, the measured intensity of the nonoverlapped line of the disturbing element, multiplied by a constant factor (experimentally determined), may be subtracted from the measured intensity of the analytical line to give the characteristic intensity. Absorption effects can, however, strongly influence the ratio. This is most clearly the case when there is an absorption edge of a major element between the two diagram lines considered.
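The overlap correction just described is a single subtraction; the sketch below assumes the overlap factor has been determined experimentally on pure-element or overlap-free reference specimens.

```python
def corrected_intensity(I_analytical, I_free_line, overlap_factor):
    """Net analyte intensity after subtracting the interfering element's
    contribution, estimated from one of its overlap-free lines scaled by
    the experimentally determined intensity ratio."""
    return I_analytical - overlap_factor * I_free_line
```

As the text notes, the factor is only constant while absorption effects leave the two diagram lines' ratio unchanged; an intervening absorption edge of a major element invalidates it.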
4. Primary Radiation Scattered by the Specimen
Photons of all wavelengths present in the white spectrum of the incident beam are scattered by the specimen, including the characteristic lines of the tube anode, giving rise to a continuous background. If, however, the specimen consists of rather coarse grains, it may happen that a crystallite is in a favorable position for Bragg diffraction for a wavelength of the continuum; thus, a sharp peak will be found in the spectral analysis. The influence of primary tube lines, coherently or incoherently scattered, may be eliminated by the proper choice of the anode.
5. Satellite Lines
The common wavelength tables give only the characteristic K, L, and M lines of most elements. The M lines of heavy elements may interfere with the K lines of light elements. Nondiagram satellite lines may also occur, often giving rise to an unexpected background level. One of the tables that includes such lines is provided by NIH (Bethesda, MD) (Garbauskas and Goehner, 1983).* The lines that are easiest to observe are some of the satellite lines of Al, Si, and P with a wavelength-dispersive spectrometer. In Figure 3, a spectrum of aluminum is shown, on which some of these lines have been annotated.
*Database made available through C. Fiori, National Institutes of Health, Bethesda, Maryland.
control specimen fall outside a predetermined range, then adequate measures must be taken. The first step is usually to perform a correction for drift. DeGroot (1990) has described how the use of statistical process control (SPC) can be beneficial in this respect. The SPC charts that can be maintained in this way allow one to check the performance of the spectrometer system.
C. Drift-Correction Monitors
Correction for drift can be made by measuring selected specimens for each of the analytes and calculating the ratio between the observed intensities and those obtained when the calibration was performed. The ratios can then be applied as a correction to the slope of the calibration graphs or, as is done more often, the measured intensities of the unknowns are corrected for drift prior to the conversion to concentration. If drift correction is performed regularly (e.g., once a day), the drift between subsequent measurements is very small and, thus, measurements with high precision are required; otherwise, the counting statistical error of the measurement would become dominant. Drift correction should not even be applied unless the correction is significant. Because the precision of an intensity measurement is determined by the number of counts collected, drift-correction monitors are, ideally, specimens on which high count rates can be obtained. Drift-correction monitors do not have to be specimens with a composition similar to the unknowns.
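The correction itself is a single ratio; the sketch below scales a measured intensity by the monitor intensity observed at calibration time relative to its current value.

```python
def drift_corrected(intensity, monitor_at_calibration, monitor_now):
    """Correct a measured intensity for instrument drift using a monitor
    specimen measured both at calibration time and at measurement time.
    A sensitivity drop (monitor_now < monitor_at_calibration) scales the
    measured intensity up."""
    return intensity * (monitor_at_calibration / monitor_now)
```

Because the correction is a ratio of two measured intensities, its own counting statistical error propagates into every corrected result, which is why high-count-rate monitors are preferred.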
As different components of the spectrometer (such as the x-ray tube) age, not only will the sensitivity be affected (this is generally a downward trend), but in some cases, the background will vary also. To correct for this change in background intensity, measurements of the background must be performed. This poses a problem, inasmuch as the count rate on the background is usually low and thus measurements are not very precise or they require a long measuring time. However, the considerations on the counting statistical error made earlier (Sec. III.B.1) offer some suggestions. First, the intensity of the background is irrelevant when the intensity of the analyte peak is much higher. Obviously, this has to apply for all specimens to be measured. If that is the case, then variations in the background intensity are also negligible. When the background matters, it should already be measured anyway, and then drift correction on the background intensity is not required, as the net count rate is corrected for drift. The only case that is not covered here is the contribution of spectral contamination from the x-ray tube (e.g., Cu, W, Ag, Fe, etc.). In that case, the background, including the contamination, must be measured on peak. This implies that measurements must be done for specimens with zero analyte concentration. Again, two cases can be distinguished: If the background including the contamination is not important compared to the count rate observed, then no corrections are required. On the other hand, if one of the analytes is present at low concentration, then the contribution to the background due to the contaminant must be checked periodically and taken into account.
After the drift correction is performed, a quality control specimen should be measured to verify the procedure.
D. Recalibration Standards
Sometimes (e.g., after a major maintenance on a spectrometer), drift correction does not
bring the quality control specimens back in line with the expectations. In those cases, the
calibration curve must be reconstructed. This can be done by measuring all the standard
specimens again and repeating the complete calibration procedure. Because calibrations
often use many standards and validating each calibration is required, this can be a time-
consuming process, even if the validation is limited to a quick visual inspection of the
calibration graphs. In such cases, a recalibration can be performed based on only a few
standards. The idea of recalibration is to reconstruct the calibration graph, without having
to measure all the standards again. This is done by selecting a few standard specimens for
each analyte and measuring these (the top and bottom points in Fig. 4). Subsequently, when determining the parameters (such as slope and intercept) of the regression, the certified values of these specimens are no longer used; instead, the values found on the calibration line at the time of the original calibration (the x-ray values) are used. These x-ray values were found based on all the standard specimens used, and the idea is to fix the calibration line again through these points. As a result,
the statistical data are now skewed, but the values for the slope and the intercept are very close to the original ones; the small differences between old and new values are due to the counting statistical errors in the measurements, and these are also present when unknowns are measured. The specimens used for recalibration have also been used for the calibration. The only requirement is that the recalibration specimens have count rates that are sufficiently different that the slope can be determined accurately. For each of the
calibration lines, the number of selected specimens must be at least the same as the number
of parameters to be determined. If the slope and intercept are determined, at least two
recalibration standards are required. However, if in this case three or four standards are
used, it is possible to detect gross counting artifacts (e.g., caused by mislabeled standards,
incorrect loading, etc.). The root mean square error or the correlation coefficient on the
calibration line (which can only be calculated if more specimens are used than parameters
determined) has no relationship with the accuracy of the analysis. In fact, a near-perfect
correlation should be obtained. As in the case of drift correction, it is recommended to
Copyright 2002 Marcel Dekker, Inc.
Figure 4 The original calibration line, based on seven data points (open squares), can be reconstructed using only two data points (in this case, top and bottom), with concentrations (x-ray values) modified to those obtained on the calibration (filled squares).
measure, after recalibration, the quality control specimen(s), as gross errors might thus be identified before unknown specimens are analyzed.
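The recalibration procedure can be sketched numerically. In the sketch below, all count rates, concentrations, and the sensitivity change are invented; the point is that refitting two standards against their x-ray values (the concentrations read off the original calibration line) restores the original intercept and rescales the slope by exactly the drift in sensitivity.

```python
def linfit(x, y):
    """Ordinary least-squares slope and intercept of y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

# Original calibration: certified concentrations (wt%) vs. count rates (kcps)
W_cert = [0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0]
I_orig = [1.1, 2.05, 4.1, 7.9, 12.1, 15.8, 20.2]
b, a = linfit(I_orig, W_cert)                 # W = a + b*I, from all 7 standards

# "x-ray values" of the two recalibration standards (bottom and top points):
# the concentrations the ORIGINAL line assigns to them, not the certified values
I_two = [I_orig[0], I_orig[-1]]
W_xray = [a + b * i for i in I_two]

# After maintenance, the sensitivity has changed (invented factor)
gain = 0.93
I_two_new = [i * gain for i in I_two]

# Refit through the x-ray values only
b_new, a_new = linfit(I_two_new, W_xray)
```

With only two points, the fitted line passes exactly through both, so b_new = b/gain and a_new = a up to rounding; the statistical data are skewed, as noted above, but the line itself is restored.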
E. Conclusion
Setting up a calibration to be used over extended periods of time requires considerable
amounts of work and preparation. It involves not only the selection and procurement
of standard specimens but also drift correction monitors and recalibration standards.
Also, the specimen preparation method is an essential part of the whole procedure.
The result, however, is the ability to produce quantitative results to a previously assessed degree of accuracy and precision over extended periods of time with minimal
work, once the specimen preparation procedure is set up and the initial calibration is
performed.
Figure 5 Straight lines through the data points can be determined by minimizing either the sum of the squares of the residuals ΔI or ΔW.
B. Matrix Effect
Applying Eq. (50) or (53) requires that all standards be similar to the unknown in all aspects considered: matrix effect, homogeneity, and so on. This would lead to the use of standard specimens with a very limited concentration range. Such a requirement is in disagreement with the observation that the variance on the slope factor is smaller with increasing range: The use of standard specimens covering only a small concentration range will lead to a calibration graph with large uncertainty on the slope and intercept. On the one hand, this advocates the use of a set of standards with a wide range of concentrations; on the other hand, the requirement of similarity in matrix effects tends to limit the range.
Obviously, a compromise must be made. Equation (50) is a simplification of the more general equation describing the relationship among analyte concentration Wi, specimen homogeneity Si, measured intensity Ii, and matrix effect Mi:

Wi = Ki Ii Mi Si    (55)
The term specimen homogeneity also includes the grain size effect and the mineralogical effect. These are notoriously difficult to treat mathematically; in fact, most methods describing the grain size effect rigorously assume, for example, the dispersed phase to be perfect spheres of a given diameter or an arrangement of cubes (Bonetto and Riveros, 1985). Other methods allow more variability, but these also require a priori more information about the specimen, such as the composition of the individual granular phases,
the average shape and size of the phases, and so forth (Hunter and Rhodes, 1972; Lubecki et al., 1968; Holynska and Markowicz, 1981). The fact that the specimen homogeneity is as yet not described by a single successful method is one of the reasons that Eq. (55) is commonly reduced to

Wi = Ki Ii Mi    (56)

Fortunately, by using adequate specimen preparation methods, the effect of Si between specimens (standards as well as unknowns) can be rendered constant. This constant factor is then absorbed by the sensitivity Ki.
As a first approximation, the degree of variation in matrix effect between two specimens for a given analyte i can be estimated by calculating, for both compositions, the following parameter:

Ii ∝ Pi = μi(λ0) / [μs(λ0) + A μs(λi)]    (57)

where A = sin ψ′/sin ψ″ is the geometry factor.
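As a numerical illustration of Eq. (57), the sketch below compares Pi for Cr in two hypothetical Fe-Cr-Ni compositions. The attenuation coefficients and the angles (hence the geometry factor) are assumed values chosen for illustration, not data from the text.

```python
import math

A = math.sin(math.radians(63)) / math.sin(math.radians(33))  # assumed geometry

# Invented mass-attenuation coefficients (cm^2/g) at the tube line l0 and
# at the analyte line li
mu_l0 = {"Fe": 70.0, "Cr": 90.0, "Ni": 60.0}
mu_li = {"Fe": 72.0, "Cr": 490.0, "Ni": 380.0}

def P(analyte, comp):
    """Eq. (57): P_i = mu_i(l0) / [mu_s(l0) + A*mu_s(li)], where mu_s is
    the weight-fraction average over all elements, as in Eq. (3)."""
    ms0 = sum(w * mu_l0[e] for e, w in comp.items())
    msi = sum(w * mu_li[e] for e, w in comp.items())
    return mu_l0[analyte] / (ms0 + A * msi)

p1 = P("Cr", {"Fe": 0.80, "Cr": 0.18, "Ni": 0.02})
p2 = P("Cr", {"Fe": 0.70, "Cr": 0.18, "Ni": 0.12})
delta = abs(p1 - p2) / p1      # relative difference in matrix effect
```

A delta of more than a few percent signals that the two compositions cannot safely share a single linear calibration.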
Enhancement will generally lead to a calibration graph like curve 4 in Figure 6. The effect of enhancement is usually smaller than that of positive or negative absorption, as indicated by the position, relative to curve 1, of curves 2 and 3 and curve 4.
It can be shown that the behavior of the calibration curves can be explained in terms of attenuation coefficients only if absorption is the only matrix effect. Furthermore, if monochromatic excitation is used, a single constant (calculated from attenuation
Figure 6 Calibration curves for binaries. Curve 1: no net matrix effect; curve 2: net absorption of the analyte's radiation by the matrix (positive absorption); curve 3: net absorption of the analyte's radiation by the analyte (negative absorption); curve 4: enhancement of the analyte's radiation by the matrix.
coefficients) suffices to express the effect of one element on the intensity of another. In the following section, various methods to deal with matrix effects will be discussed.

μs* = μs(λ0) + A μs(λi)    (60)
characteristic lines. They are also shifted toward longer wavelengths by an amount Δλ, which is given by

Δλ = 0.00243 (1 − cos ψ)    (61)

where ψ is the angle through which the radiation is scattered, ψ = ψ′ + ψ″, and Δλ is expressed in nanometers.
In the spectrometer used for the recording of the spectra in Figure 8, ψ is 100°. The Compton shift, Δλ, is thus 0.0029 nm for this configuration. The maxima of the Compton peaks (at 0.0576 nm and 0.0644 nm) in Figure 8 are in good agreement with the theoretical values: 0.0546 nm + 0.0029 nm = 0.0575 nm and 0.0613 nm + 0.0029 nm = 0.0642 nm, respectively.
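The arithmetic above is easy to verify; Eq. (61) and the wavelengths quoted in the text are the only inputs.

```python
import math

def compton_shift_nm(psi_deg):
    """Eq. (61): Compton shift (nm) for scattering angle psi (degrees)."""
    return 0.00243 * (1.0 - math.cos(math.radians(psi_deg)))

dl = compton_shift_nm(100.0)               # geometry of Figure 8
peaks = [0.0546 + dl, 0.0613 + dl]         # predicted Compton maxima (nm)
```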
The intensity of the Compton-scattered radiation is higher for shorter wavelengths and for specimens consisting of elements with low atomic numbers. For a given wavelength (e.g., the characteristic radiation of a tube line), the intensity of Compton scatter decreases as the specimen consists of more and more elements with higher atomic numbers. For specimens made up of oxides, the scattered intensity is usually so intense that it can be measured with sufficient precision in a relatively short time. On the other hand, for specimens made up predominantly of heavier elements, such as steel, and even more so for brasses and solders, the intensity of the scattered radiation is very low (see Fig. 8) and counting statistical error can preclude precise analysis in a reasonable amount of time. The most common application where this approach (or a variant thereof) is used is in the determination of trace elements in specimens of geological origin. This is illustrated in Figure 9 for the determination of Sr in specimens of widely
varying geological origin. In Figure 9a, the net count rate for Sr Kα is plotted against the concentration for a large number of specimens. There is a considerable spread around the calibration line established. The scatter is greatly reduced when the net count rate of Sr Kα is divided by the count rate of the Compton-scattered tube line, as shown in Figure 9b.
Figure 9 (a) Net count rate of Sr Kα as a function of Sr concentration for a large number of specimens of varying geological origin. There is considerable spread around the calibration line. (b) The ratio of the count rates of the Sr Kα radiation and the Compton-scattered tube line is plotted against the concentration of Sr. The spread of the data points around the calibration line is now much reduced compared to (a); this is especially the case for the point labeled A.
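The behavior seen in Figure 9 can be mimicked with synthetic numbers: when a varying matrix attenuates the analyte line and the Compton-scattered tube line by (approximately) the same factor, the ratio of the two count rates is matrix independent. Every value below is invented for illustration.

```python
K = 50.0                                     # sensitivity, kcps per mass fraction
W_sr = [0.001, 0.002, 0.005, 0.008, 0.010]   # Sr mass fractions
matrix = [0.62, 1.35, 0.88, 1.10, 0.75]      # per-specimen absorption factor

I_sr = [K * w * m for w, m in zip(W_sr, matrix)]   # analyte count rates
I_compton = [200.0 * m for m in matrix]            # scattered tube line

ratios = [a / c for a, c in zip(I_sr, I_compton)]  # matrix factor cancels
```

The raw I_sr values scatter badly around a straight line in W_sr, while the ratios fall exactly on one; in real specimens the cancellation is only approximate.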
2. Internal Standard
In this method, an element is added to each specimen in a fixed proportion to the original sample. This addition has to be made to the standard samples as well as to the unknowns. The characteristic radiation of the element added should be similar to the characteristic radiation of the analyte in terms of absorption and enhancement properties in the matrix considered. Such an element is called an added internal standard, or internal standard for short. In practice, the method works equally well if a pure element or a pure compound is added, or if a solution with the internal standard element is used. If a solution is used, care must be taken that the solution itself does not contain any elements that are to be analyzed. The composition of the solution used as the additive must be constant; otherwise, it might affect the matrix effect. The intensity of the internal standard is affected by matrix effects in much the same way as the intensity of the analyte, provided there are no absorption edges (leading to a difference in absorption) or characteristic lines, including scattered tube lines (leading to a difference due to enhancement), between the two wavelengths considered. Because
Wi = Ki Ii Mi    (62)

for the analyte i and

Ws = Ks Is Ms    (63)

for the internal standard s, the following ratio can be obtained by dividing Equation (62) by Equation (63):

Ii / Is = Kis Wi    (64)

where

Kis = Ks Ms / (Ki Mi Ws)    (65)
Because the same amount of internal standard is added to all specimens, Ws is essentially a constant and can be included in the constant Kis. It should be noted that Mi (and Ms) is not a constant over the concentration range of interest (otherwise linear calibration would suffice) but depends on the matrix elements. However, if both Mi and Ms vary in a similar manner with the matrix elements, the ratio Mi/Ms is less sensitive to variation in the matrix effect and, in practice, can be considered a constant. In practice, the constant Kis is determined using linear regression. The main advantage of the internal standard method over the scattered-radiation method is its ability to correct effectively for enhancement as well as for absorption. It also corrects, at least partially, for variations in density of pressed specimens. The requirement that the intensity of the characteristic radiation of both the analyte and the internal standard element vary in the same manner with the
matrix effects imposes that there should be no absorption edges and no characteristic radiation from other elements between the measured line of the analyte and that of the internal standard. Furthermore, ideally, the analyte should not enhance the internal standard or vice versa. If Kα radiation is measured and if the atomic number of the analyte is Z (with Z > 23), then, very often, the elements with atomic number Z + 1 or Z − 1 are very good candidates. This ensures that there are no K absorption edges and no K emission lines of other elements between the two elements considered. The element with atomic number Z is not enhanced by Kα radiation from an element with Z + 1, but only by the much weaker Kβ radiation, whereas for elements with atomic number Z + 2 and higher, both Kα and Kβ contribute to enhancement. In practice, some enhancement between the internal standard element and the analyte or vice versa is allowed, as the concentration of the internal standard is constant and the ratio is based on intensities. The absence of major elements in the specimens with L absorption edges and emission lines, however, must be checked for. The situations that must be avoided are (1) the case where a major line of a major matrix element is between the absorption edges of the analyte and the internal standard and (2) the case where a major absorption edge of a major matrix element is situated between the measured characteristic lines of the analyte and the internal standard. In the first case, the matrix element would enhance either the analyte or the internal standard element, but not both; in the second case, the matrix element absorbs strongly either the radiation from the analyte or the internal standard, but not both. In both of these cases, varying concentrations of the matrix element will lead to variable and different effects on the intensities of the analyte and of the internal standard, and the ratio used in Eq. (64) will not compensate for such events.
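A minimal numerical sketch of Eqs. (62) to (65), with invented sensitivities and matrix factors: the per-specimen matrix factor cancels in the ratio Ii/Is, and the single constant Kis is recovered by regression through the origin.

```python
W_i = [0.5, 1.0, 2.0, 4.0]          # analyte concentrations (wt%)
M = [1.20, 0.85, 1.05, 0.90]        # per-specimen matrix factor (invented)
W_s, K_i, K_s = 1.0, 0.1, 0.04      # fixed added amount and sensitivities

# Intensities from W = K * I * M, Eqs. (62) and (63); here Mi = Ms = M
I_i = [w / (K_i * m) for w, m in zip(W_i, M)]
I_s = [W_s / (K_s * m) for m in M]

r = [a / b for a, b in zip(I_i, I_s)]     # Eq. (64): Ii/Is = Kis * Wi
K_is = (sum(ri * wi for ri, wi in zip(r, W_i))
        / sum(wi * wi for wi in W_i))     # least squares through the origin
```

Consistent with Eq. (65): with Mi = Ms, the ratio Ms/Mi is 1, so Kis = Ks/(Ki Ws); an unknown with measured ratio r_u is then quantified as r_u/Kis.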
The method, however, has some important limitations:
The specimen preparation is made more complicated and is more susceptible to errors.
The addition of reagents and the requirement of homogeneity of the specimen tend to limit the practical application of the method to the analysis of liquids and fused specimens, although it sometimes finds application in the analysis of pressed powders.
Although the rule Z + 1 or Z − 1 can serve as a rule of thumb, it is quite clear that for samples where many elements are to be quantified, a suitable internal standard cannot be found for every analyte element. Sometimes, several elements are combined in one internal standard solution to provide suitable internal standards for several analytes.
Also, the fact that the internal standard method is easier to apply to liquids can generate some problems. Heavier elements (e.g., Mo) are more difficult to determine using this method, because liquid specimens are generally not of infinite thickness for the K wavelengths of these heavier elements. In such cases, the L line can be used, with an appropriate internal standard. The method will, however, also provide some compensation for the effects of noninfinite thickness, especially if the wavelength of the internal standard selected is very similar to that of the analyte line.
Theoretically, L lines of a given element can be used as internal standards for K lines
of other elements and vice versa if these wavelengths are reasonably close to each other
and neither interfering lines nor edges occur between them. In principle, the method allows
the determination of one or two elements in a specimen without requiring analysis (or
knowledge) of the complete matrix.
The range of concentration over which this method is suitable can be quite large, up to 10% to 20% in favorable situations, but the internal standard technique is most effective at low concentrations (or high dilutions). The method finds, for instance, application in the determination of Ni and V in petroleum products, where Mn Kα is used as an internal standard (ISO, 1995). The internal standard method allows an accurate determination of these elements in a much wider variety of petroleum products than the method based on linear calibration.
Figure 10 Standard addition method. The net intensity is plotted versus weight fraction of the
element added to the sample and a best-fit line is determined. The intercept of that line with the
concentration axis is Wi.
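The construction of Figure 10 can be sketched as follows; the sensitivity and the true analyte content are invented, and a strictly linear response is assumed.

```python
added = [0.0, 0.01, 0.02, 0.04]            # weight fraction of analyte added
W_true = 0.015                             # unknown analyte content (invented)
sens = 400.0                               # invented sensitivity, kcps per unit
I = [sens * (W_true + a) for a in added]   # linear response assumed

# Least-squares line I = intercept + slope * added
n = len(added)
mx, my = sum(added) / n, sum(I) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(added, I))
         / sum((x - mx) ** 2 for x in added))
intercept = my - slope * mx

# The best-fit line crosses zero intensity at -W_true; its magnitude is Wi
W_est = intercept / slope
```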
4. Dilution Methods
Dilution methods can also eliminate or reduce the variation of the matrix effect, rather than compensating for such variation. The dilution method can be explained using Eq. (17):

C(λ0, λi) = μi(λ0) / [Σj Wj μj(λ0) + A Σj Wj μj(λi)]    (67)

where the sums run over all n elements j, A = sin ψ′/sin ψ″ is the geometry factor, and μs(λ0) and μs(λi) are the mass-attenuation coefficients of the specimen for the primary wavelength λ0 and analyte wavelength λi, respectively. Apparently, deviations from linearity are due to variations in μs(λ0) and/or μs(λi). Enhancement is ignored at this stage. If one adds, to the sample, D grams of a diluent (d) for each gram of sample, the denominator of Eq. (67) becomes

[1/(1 + D)] [μs(λ0) + A μs(λi)] + [D/(1 + D)] [μd(λ0) + A μd(λi)]    (68)

If the term [D/(1 + D)] [μd(λ0) + A μd(λi)] is much larger than [1/(1 + D)] [μs(λ0) + A μs(λi)], the factor C(λ0, λi) becomes essentially a constant and variations due to varying matrix effects between samples become negligible. This can be done in two ways.
(a) Making D/(1 + D) large by diluting each sample by adding a large, known amount of a diluent.
(b) Adding a smaller quantity of diluent than in the previous case, but with a much larger value for μd(λ0) + A μd(λi). This is called the technique of the heavy absorber.
Both of these procedures, however, require the addition of reagents to the sample. This can easily be done for dissolved samples, either in liquids or fused samples, but it is more difficult for powdered samples (homogeneity!).
These methods do not eliminate the matrix effects completely, but reduce their influence. On the other hand, they also reduce the line intensity of the analyte; thus, a compromise must be sought.
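A numerical sketch of Eq. (68), with invented absorption terms, shows how dilution compresses the difference between two dissimilar samples; A is an assumed geometry factor.

```python
A = 1.6                                    # assumed geometry factor

def term(mu0, mui):
    """Absorption term mu(l0) + A*mu(li) of a material."""
    return mu0 + A * mui

mu_samples = [term(60.0, 180.0), term(90.0, 320.0)]   # two dissimilar samples
mu_d = term(30.0, 80.0)                    # light diluent (invented values)

def diluted(mu_s, D):
    """Eq. (68): D grams of diluent added per gram of sample."""
    return mu_s / (1.0 + D) + D * mu_d / (1.0 + D)

spread = {}
for D in (0.0, 9.0):                       # undiluted vs. 9:1 dilution
    t = [diluted(m, D) for m in mu_samples]
    spread[D] = abs(t[1] - t[0]) / t[0]    # relative matrix-term difference
```

The same compression could be achieved with a smaller D and a heavy absorber, i.e., a diluent with a much larger absorption term; the price in both cases is a loss of analyte intensity.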
Dilution methods also have the advantage of reducing the enhancement effect if one uses a nonfluorescing diluent (e.g., H2O or Li2B4O7). In this case, the effect is reduced by the fact that the concentrations of both the enhancing element and the analyte are reduced. If the diluent contains elements whose characteristic x-rays can excite the analyte, as well as some other matrix elements, then the contribution of those unknown quantities of matrix elements to the total enhancement is reduced: The enhancement of the analyte by the diluent would then be determining and can be considered to be constant.
This method allows the determination of all measurable elements in the sample, as
opposed to the standard addition method, where an addition must be made for each
element of interest.
D. Mathematical Methods
1. General
The term mathematical methods refers to those methods that calculate, rather than eliminate or measure, the matrix effect.
Mathematical methods are independent of the specimen preparation in the sense that specimen preparation is taken into account if the composition of the specimen presented to the spectrometer has been changed (e.g., by fusion), but mathematical methods do not prescribe the specimen preparation method as is done, for example, by the standard addition method. The actual calculation method used to convert intensities to concentrations does not affect the choice of the specimen preparation method. The aim of the specimen preparation is limited to the presentation to the spectrometer of a specimen that is homogeneous (with respect to the XRF technique) and that has a well-defined, flat surface representative of the bulk of the specimen. Mathematical methods usually require knowledge of all elements in the standard specimens and allow determination of all measurable elements in the unknowns. In practice, trace elements can be neglected in the calculations for the analytes present at higher concentrations, as these trace compounds are neither subject to an important (and variable) matrix effect nor do they contribute significantly to the matrix effect of other elements. Their concentrations are often found by straightforward linear regression. The mathematical methods are divided into two main categories: the fundamental parameter method and the methods using influence coefficients.
Step 4. The process, starting at Step 2, is now repeated until convergence is obtained. Different convergence criteria exist. The calculation can be terminated if one of the following criteria is satisfied for all the elements (compounds) concerned:
1. The intensities, calculated in Step 2, do not change from one step to another by more than a preset level.
2. The intensities, calculated in Step 2, agree, to within a preset level, with the measured intensities.
3. The compositions, calculated in Step 3, do not change from one step to another by more than a preset level (e.g., 0.0005 or 0.0001 by weight fraction).
One or more of these criteria might be incorporated in the program. These criteria, however, are no guarantee that the final result is accurate to within the level specified in the convergence criteria. Furthermore, especially with convergence criteria based on concentrations (such as criterion 3), it must be realized that a convergence criterion of 0.0005 is unacceptable when determining elements at levels below 0.0005.
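The iteration of Steps 2 to 4 with criterion 3 can be sketched as follows. The forward model here is a toy stand-in for the fundamental parameter equation (an invented, mild matrix effect), not a real FP calculation.

```python
def forward_intensity(W):
    """Toy Step-2 model: relative intensity with a mild matrix effect."""
    return [w / (1.0 + 0.3 * (1.0 - w)) for w in W]

I_meas = forward_intensity([0.7, 0.3])    # pretend these were measured

W = [0.5, 0.5]                            # Step 1: first estimate
for _ in range(100):
    I_calc = forward_intensity(W)         # Step 2: calculated intensities
    # Step 3: scale each concentration by measured/calculated intensity
    W_new = [w * im / ic for w, im, ic in zip(W, I_meas, I_calc)]
    s = sum(W_new)
    W_new = [w / s for w in W_new]        # normalize to unit sum
    done = max(abs(a - b) for a, b in zip(W_new, W)) < 0.0001  # criterion 3
    W = W_new
    if done:                              # Step 4: stop on convergence
        break
```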
d. Extensions to the Method
More complex scenarios are possible. The most common one includes the calculation of a set of influence coefficients (based on theoretical calculations) to obtain a composition quickly, close to the final result. Next, the fundamental parameter method is applied (Rousseau, 1984a). This should yield faster convergence in terms of computation time, because the calculation by influence coefficients of the preliminary composition is very fast. This method reduces the number of evaluations of the fundamental parameter equation.
Also, the different programs available differ quite markedly in their treatment of the intensities of the standards measured. Some programs use a weighting of the standards, stressing the standard(s) closest (in terms of intensity) to the unknown (Criss,
1980a). Such programs use a different calibration for each unknown specimen. The unknown specimen dictates which standards will be given a high weighting factor and which standards will be used with less weighting. Other programs use all standards with equal weighting.
e. Typical Results
Early results of the fundamental parameter method on stainless steels are given by Criss and Birks (1968). The average relative differences between x-ray results and certified values were about 3% to 4%. Later, Criss et al. (1978) reported accuracies of about 1.5% relative for stainless steels, using a more accurate fundamental parameter program. Typical results for tool steel alloys are given in Table 3. Often, the fundamental parameter method is considered to be less accurate than an influence coefficient algorithm. This is caused primarily by the fact that the fundamental parameter method has extensively been used and described as a method that allows quantitative analysis with only a few standards. This is obviously an advantage, but it does not imply that fundamental parameter methods cannot be used in combination with many standards similar to the unknown. As a matter of fact, on several occasions and with a variety of matrices, the authors have obtained results of analysis with an accuracy similar to that of influence coefficient algorithms when using the same standards in both cases.
f. Factors Affecting Accuracy
The accuracy of the final results is determined by the following:
The measurement
The specimen preparation
The physical constants used in the fundamental parameter equation
The limited description of the physical processes that are considered in the fundamental parameter equations
The standards and the calibration
In the following discussion, the effect of measurements and specimen preparation will not be considered.
Table 3 Analysis of Seven Tool Steels with a Fundamental Parameter Program [XRF11, from
CRISS SOFTWARE, Largo, MD (Criss, 1980)]
Element Minimum conc. (%) Maximum conc. (%) Standard deviation (%)
Incidence and exit angles. The incidence angle in most wavelength-dispersive (WD) and energy-dispersive (ED) spectrometers is, in fact, defined by a relatively wide cone with a different intensity at the boundaries compared to the center. This incident cone is neglected and the incident radiation is considered parallel, along a single, fixed direction. A similar observation holds for the exit angle. The effect is far less pronounced if diffraction from a plane crystal surface is used for dispersion, as is done, for example, in most sequential WD spectrometers. This has been studied to some extent by Muller (1972). To our knowledge, none of the fundamental parameter programs available takes this effect into account. Its influence, however, is, to some extent, compensated for by calibration with standard specimens.
Spectrum of incident beam. The spectrum of the incident beam from an x-ray tube requires more attention. Parts of the primary spectrum might excite an element B that, in turn, excites element A very efficiently. In such cases, this enhancement may make the intensity of element A sensitive to small errors in the tube spectrum representation, which would not be compensated for if pure A was used for calibration. This can arise, for example, in the analysis of silica-zirconia specimens with a Rh tube (Criss, 1980b). Pure silica is relatively insensitive to the intensity of the characteristic K lines of Rh. In combination with Zr, however, the situation is different. Indeed, the Rh K lines are strongly absorbed by Zr. Zr then emits K and L lines that enhance Si. As a result, the Si intensity is more sensitive to the Rh K lines in SiO2-ZrO2 mixtures than it is in pure SiO2. Tube spectra have been calculated using, for example, the algorithm of Pella et al. (1985).
Mass-attenuation coefficients. There are several compilations of mass-attenuation coefficients published in the literature. A continuing effort to compile the most comprehensive table has been undertaken by the National Institute of Standards and Technology (formerly National Bureau of Standards), Gaithersburg, MD.
When selecting a table of mass-attenuation coefficients for use in a fundamental parameter program, the following question must be addressed: Does the table cover all the analytical needs? (In practice, does it cover the complete range of interest from the longest wavelength considered, down to the excitation potential of the tube?)
The analyst should be aware that the use of formulas to generate mass-attenuation coefficients can lead to values that can be significantly different from the corresponding table values.
Presently, for applications in XRF, the compilations of McMaster et al. (1969), Heinrich (1966), Leroux and Thinh (1977), or Veigele (1974) are most often used. A short discussion on the agreement between some of these compilations has been presented by Vrebos and Pella (1988). A more recent compilation has been published by de Boer (1989).
Fluorescence yields. A comprehensive reference to fluorescence yields, including Coster-Kronig transitions, can be found in the work of Bambynek et al. (1972) (see also Chapter 1 and Appendix VI).
Absorption jump ratios. These can be derived from the tables of attenuation coefficients.
Ratios of different fluorescent lines within a family. Data for the K spectra can be found in the work by Venugopala Rao et al. (1972) (see also Chapter 1).
Wavelengths of absorption edges and emission lines. A comprehensive table was published by Bearden (1967) and is also presented in the appendices to Chapter 1. Because attenuation coefficients are wavelength dependent, an error in a wavelength of any characteristic line will automatically lead to a bias in the corresponding attenuation coefficients.
h. Limited Physical Processes Considered
The fundamental parameter equation [Eq. (16)] does not consider all physical processes in the specimens. Three of the most obvious that are missing are described here.
Tertiary fluorescence. Although the formula for tertiary fluorescence has been derived by, for example, Shiraiwa and Fujino (1966) and Pollai and Ebel (1971), it is not included in most fundamental parameter programs. Usually, the tertiary fluorescence effect is considered small enough to be negligible. Shiraiwa and Fujino (1967, 1974) have presented data showing a maximum contribution of tertiary fluorescence of about 3% relative to the total intensity of Cr in Fe-Cr-Ni specimens. Therefore, even in Fe-Cr-Ni specimens, whose characteristic lines and absorption edges are ideally positioned relative to one another to favor enhancement, the effect of tertiary fluorescence is quite limited. Higher-order enhancement is also possible, but it is even less pronounced than tertiary fluorescence.
Scatter. Other processes not considered in most of the fundamental parameter methods are coherent and incoherent scatter of both the primary spectrum and the fluorescent lines. This is usually justified by pointing out that the photoelectric effect is, by far, the major contribution to the total absorption. It is believed that the contribution by scattered photons to the excitation of characteristic photons is negligible. However, in some cases the scattered primary spectrum may have a considerable influence, as illustrated earlier in this chapter. The equations describing the contribution of scatter to fluorescent intensity have been derived by Pollai et al. (1971). These equations have obviously many similarities to those for secondary fluorescence.
Photoelectrons. The processes that are probably the least well understood in the fundamental parameter method are related to the contributions of the photoelectrons and of the Auger electrons that are produced as a result of absorption of the primary and fluorescent x-ray photons. These electrons have sufficient energy to excite other atoms and thus create additional fluorescence. This is especially important in the case of low-atomic-number elements, as has been described by Mantler (1993) and has been illustrated, for example, by Kaufmann et al. (1994).
i. Standards and Calibration
The use of good standards (similar to the unknown) will almost always lead to more accurate results, compared to a situation where the standards used have a widely different composition from that of the unknown. This is because most of the uncertainties caused by inaccuracies in the physical constants cancel. The degree of similarity between standards and unknown has an important effect on the accuracy of the analysis.
for a multielement specimen, with n being the total number of elements or compounds. In most of the influence coefficient algorithms, one element is eliminated from the summation {i.e., one influence coefficient is used when dealing with binaries, as in Eq. (70), and n − 1 coefficients deal with a specimen consisting of n compounds [Eq. (71)]}. In Eq. (71) this is explicitly indicated by the j ≠ e under the summation sign. The eliminated compound e can be any of the ones present in the specimens; however, most authors eliminate the analyte. The expression for Mi is then used as follows:
Wi = Ri [1 + Σ(j≠e) mij Wj]    (72)

which links the relative intensity and the influence coefficients to the composition of the specimen. The relative intensity Ri for a given analyte is defined as the ratio of the net measured intensity Ii in the specimen and the intensity I(i) that would have been measured on the pure analyte under identical conditions:

Ri = Ii / I(i)    (73)
In practice, the relative intensity is often derived indirectly from measurements on standards, and the pure element (or compound) is not required. Equation (56) can be rewritten in terms of Ri:

Wi = Ri Mi    (74)

I(i) = Ii Mi / Wi    (77)
where Mi is calculated and Ii is measured. Mi can be calculated using Eq. (75), where Ri is calculated from theory for the standard specimen of known composition. Therefore, although many influence coefficient algorithms are presented using the format of Eq. (72), involving the relative intensity, there is no real need to perform measurements of the pure analyte, as Eq. (72) can be written as

Wi = Ki Ii [1 + Σ(j≠e) mij Wj]    (78)
where Ki is then determined during the calibration phase. Furthermore, if the background is not subtracted, Eq. (78) can be written as

Wi = Bi + Ki Ii [1 + Σ(j≠e) mij Wj]    (79)
where the summation covers all n elements (or compounds) in the specimen, except the analyte itself. Hence, there are n − 1 terms in the summation. This is common to all currently used algorithms. Most of the algorithms developed earlier [such as Sherman's (1953) and Beattie and Brissey's (1954)] used to have n terms, rather than n − 1, for specimens with n elements.
Equations (80a), (80b), and (80c) are linear equations in the concentrations of the
elements W_A, W_B, and W_C, respectively. Note that there are only two coefficients for each
analyte element. Consider, for example, the first equation of the set (80), namely Eq. (80a):
Element A is the analyte, and its concentration is equal to the relative intensity R_A mul-
tiplied by the matrix correction factor (1 + α_AB W_B + α_AC W_C). This matrix correction
factor has only two coefficients: one (α_AB) to describe the effect of element B on the in-
tensity of A and, similarly, one (α_AC) to describe the effect of element C on the intensity of A.
The value of the coefficient α_AA, which would correct for the effect of A on its own in-
tensity (sometimes, but incorrectly, referred to as self-absorption), is zero. Similarly, α_BB
and α_CC are also zero. The effect of A on A, however, is taken into account, as will be
shown in the next subsection.
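The iterative solution of such a set can be sketched as follows (a minimal Python sketch; the relative intensities and α values below are arbitrary placeholders, not coefficients for any real alloy system):

```python
# Minimal sketch: solving a Lachance-Traill set such as (80) for a
# ternary A-B-C. The relative intensities R and the alpha coefficients
# are illustrative placeholders, not values for any real system.
R = {"A": 0.30, "B": 0.25, "C": 0.35}
alpha = {("A", "B"): 0.8, ("A", "C"): -0.2,
         ("B", "A"): 0.5, ("B", "C"): 0.3,
         ("C", "A"): -0.1, ("C", "B"): 0.6}

W = dict(R)  # first estimate: matrix correction term taken equal to 1
for _ in range(50):
    W_new = {i: R[i] * (1.0 + sum(alpha[(i, j)] * W[j]
                                  for j in R if j != i))
             for i in R}
    if all(abs(W_new[i] - W[i]) < 1e-9 for i in R):
        break
    W = W_new
```

Because each equation couples all three unknowns, the set is solved by successive substitution rather than in a single pass.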
Calculation of the coefficients. Lachance and Traill also showed that the influence
coefficients α_ij can be calculated for monochromatic excitation by photons with wave-
length λ_0 (assuming absorption only) from the expression

α_ij = [ m_j(λ_0) csc ψ′ + m_j(λ_i) csc ψ″ ] / [ m_i(λ_0) csc ψ′ + m_i(λ_i) csc ψ″ ] − 1        (82)

Alternatively, measurements on a binary specimen can be used: Rearranging

W_i = R_i (1 + α_ij W_j)        (83)

to

α_ij = (W_i / R_i − 1) / W_j        (84)

yields an expression that can be used to obtain α_ij, based on the composition of the
binary and the relative intensity R_i. The drawbacks associated with this method are as
follows:
1. The calculation of R_i requires the measurement of the intensity on the pure i
(element or compound). This could lead to large errors if the intensity of i in the
binary is much lower than that of the pure specimen, due to, for example,
nonlinearity of the detectors.
2. The pure elements (or compounds) are not always easily available or can be
unsuitable to present to the spectrometer (e.g., pure Na or Tl).
3. Equation (84) is very prone to error propagation when W_i is close to 1. The
numerator is then a difference between two quantities of similar magnitude, and
the denominator is close to zero, magnifying the errors.
4. Also, the availability of suitable binary specimens can present problems: Some
alloys tend to segregate, and homogeneous specimens are then difficult to
obtain.
The coefficients α_ij can also be calculated from theory: Calculate R_i for the binary
with composition (W_i, W_j) rather than obtain it from measurements, and substitute in Eq.
(84). This method eliminates drawbacks 1, 2, and 4. However, a better method, without
the problem associated with error propagation, is to use Eq. (75) directly with the values for
R_i and W_i. Lachance (1988) has also presented methods to calculate the values of the
coefficients from theory.
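For monochromatic excitation, Eq. (82) is straightforward to evaluate. The sketch below uses hypothetical mass-attenuation values; the incidence and take-off angles of 63° and 33° are those quoted later for Figure 14:

```python
import math

def lt_alpha(mu_j0, mu_ji, mu_i0, mu_ii, psi_in_deg, psi_out_deg):
    """Lachance-Traill influence coefficient, Eq. (82), absorption only.

    mu_x0: mass-attenuation coefficient of element x for the incident
    wavelength lambda_0; mu_xi: the same for the analyte line lambda_i.
    All mu values here are placeholders, not tabulated data.
    """
    csc_in = 1.0 / math.sin(math.radians(psi_in_deg))
    csc_out = 1.0 / math.sin(math.radians(psi_out_deg))
    mu_star_j = mu_j0 * csc_in + mu_ji * csc_out   # effective mu of j
    mu_star_i = mu_i0 * csc_in + mu_ii * csc_out   # effective mu of i
    return mu_star_j / mu_star_i - 1.0

a = lt_alpha(mu_j0=90.0, mu_ji=250.0, mu_i0=60.0, mu_ii=70.0,
             psi_in_deg=63.0, psi_out_deg=33.0)
```

A positive α_ij indicates that element j attenuates the analyte radiation more strongly than the analyte itself does, depressing the intensity of i.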
The Lachance–Traill algorithm assumes the following:
1. The influence coefficients can be treated as constants, independent of
concentration; this limits the concentration range in cases where the matrix
effects change considerably with composition.
2. The influence coefficients are invariant to the presence and nature of other
matrix elements. So α_FeCr, determined for use in Fe–Cr–Ni ternary specimens, is
the same as α_FeCr in Fe–Cr–Mo–W–Ta or Fe–Cr specimens.
b. The de Jongh Algorithm
Formulation. In 1973, de Jongh proposed an influence coefficient algorithm (de
Jongh, 1973), based on fundamental parameter calculations. The general formulation of
his equation is

W_i = E_i R_i [ 1 + Σ_{j=1, j≠e}^{n} α_ij W_j ]        (85)
Note that in order to obtain the concentrations of elements A and B, the concentration of
C, W_C, is not required. This is different from Lachance and Traill's algorithm: In order to
calculate the concentration of A, using Eq. (80a), the concentrations of both B and C are
required. If the user is not really interested in element C (e.g., element C is iron in stainless
steels), Eq. (86c) need not be considered, and the analysis of the ternary specimen can be
done by measuring R_A and R_B and solving Eqs. (86a) and (86b).
Calculation of the coefficients. de Jongh also presented a method to calculate the
coefficients from theory. The basis is an approximation of W_i/R_i by a Taylor series
around an average composition:

W_i/R_i = E_i + d_i1 ΔW_1 + d_i2 ΔW_2 + … + d_in ΔW_n        (87)

where E_i is a constant given by

E_i = (W_i/R_i)_average        (88)

and

ΔW_i = W_i − W_i,average        (89)

The d_ij are the partial derivatives of W_i/R_i with respect to concentration:

d_ij = ∂(W_i/R_i)/∂W_j        (90)
or

ΔW_e = −(ΔW_1 + ΔW_2 + … + ΔW_n)        (93)

(where the sum runs over all elements except e), one element e can be eliminated. The
resulting equation is similar to Eq. (87), but has only n−1 terms:

W_i/R_i = E_i + b_i1 ΔW_1 + b_i2 ΔW_2 + … + b_in ΔW_n        (94)

with

b_i1 = d_i1 − d_ie        (95)

and similarly for all b_ij except b_ie, which is equal to zero. Equation (94) has n−1 terms,
but they are still in ΔW, rather than W. Transformation of ΔW to W is done by
substituting Eq. (89) in Eq. (94):

W_i/R_i = E_i − Σ_{j≠e}^{n} b_ij W_j,average + Σ_{j≠e}^{n} b_ij W_j        (96)
indicating that the weight fraction of the analyte is calculated from its relative intensity, a
matrix correction term, and the matrix correction term for the average composition (which,
for a given composition, is a constant). The value for M_i,average can be calculated using the
influence coefficients calculated and taking the composition (W_i, W_j) equal to the average
composition. This equation is very similar at first sight to the Lachance–Traill equation
[Eq. (81)], except for the term M_i,average.

Table 4  Analysis of Stainless Steels with Theoretical Influence Coefficients (de Jongh)
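The coefficients of Eqs. (87)–(95) can be obtained numerically by differentiating W_i/R_i around the average composition. The sketch below stands in for the fundamental-parameter calculation with a simple, hypothetical linear model, so the partial derivatives of Eq. (90) are easy to verify:

```python
# Sketch of de Jongh's coefficient calculation by numerical differentiation.
# wi_over_ri() is a hypothetical stand-in for a fundamental-parameter
# calculation of W_i/R_i; a linear model is used so the derivatives d_ij
# of Eq. (90) are known exactly.
def wi_over_ri(w):                     # w = (W_1, W_2, W_3); hypothetical
    return 1.0 + 0.8 * w[1] - 0.2 * w[2]

w_avg = (0.2, 0.5, 0.3)                # average (reference) composition
h = 1e-6
d = []                                 # d_ij, Eq. (90), central differences
for j in range(3):
    wp = list(w_avg); wp[j] += h
    wm = list(w_avg); wm[j] -= h
    d.append((wi_over_ri(wp) - wi_over_ri(wm)) / (2 * h))

e = 0                                  # index of the eliminated element
b = [d[j] - d[e] for j in range(3)]    # b_ij = d_ij - d_ie, Eq. (95)
```

With element 1 eliminated, b[0] vanishes by construction, mirroring b_ie = 0 in the text.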
Typical results. Tables with de Jongh's coefficients have been used for a wide
variety of materials, including high-temperature alloys, brass, solders, cements, glasses,
and so forth. An example for stainless steels is given in Table 4. Results are shown for
Mn, Ni, and Cr only. For these analytes, the matrix effects are the most important.
Other elements, such as Si, P, S, and C, are also present at trace level. The coefficients are
calculated at a given composition [see Eq. (91)]. The practical range of concentration over
which these coefficients yield accurate results varies from 5% to 15% in alloys to the whole
range from 0% to 100% in fused oxide specimens.
Comparison between the algorithms of de Jongh and Lachance–Traill. The following
points can be noted:
1. The basis of the de Jongh algorithm is a Taylor series expansion around an
average (or reference) composition. The values of the coefficients calculated
depend on this composition.
2. De Jongh can eliminate any element; Lachance and Traill eliminate the analyte
itself: α_ii is zero. De Jongh eliminates (i.e., fixes the coefficient to zero) the same
element for all analytes. Eliminating the base material (e.g., iron in steels) or the
loss on ignition (for beads) generally leads to smaller numerical values for the
coefficients and avoids the necessity to determine all elements.
3. De Jongh's coefficients are calculated at a given reference composition. They are
composition dependent and take into account all elements present. A coefficient
α_ij represents the effect of element j on the analyte i in the presence of all other
elements: They are multielement coefficients rather than binary coefficients. This
is seen in Table 5, where the values of the coefficients α_CrCr and α_CrNi are shown
for several different specimens, but with identical concentrations for the analyte.

Table 5  Values for Influence Coefficients α_CrCr and α_CrNi, Calculated According to the
Algorithm of de Jongh, for Selected Specimens
where the summation over j has n−1 terms (all n elements, except i), and the summation
over k has (n−2)/2 terms (all n elements, except the analyte i and element j; furthermore, if
α_Ajk is used, then α_Akj is not). For a binary specimen, Eq. (99) reduces to

W_A = R_A (1 + α′_AB W_B)        (100)

with

α′_AB = α_AB + α_ABB W_B        (101)

clearly showing that the influence coefficient α′_AB varies linearly with composition (i.e.,
W_B). For binaries, W_A = 1 − W_B; hence, Eq. (101) can also be rearranged to

α′_AB = α_AB + α_ABA W_A        (102)

Equations (101) and (102) are, at least theoretically, identical. It has been shown, however,
that Eq. (102) is preferable to Eq. (101) if specimens with more than two elements (or
compounds) are analyzed (Lachance and Claisse, 1980). This will be discussed in more
detail in Sec. V.D.6. Note that the value of α_AB in Eq. (101) is different from its value in
Eq. (102).
Cross-product coefficients. For a ternary specimen, the Claisse–Quintin algorithm
can be written

W_A = R_A (1 + α_AB W_B + α_ABB W_B² + α_AC W_C + α_ACC W_C² + α_ABC W_B W_C)        (103)

The terms

α_AB W_B + α_ABB W_B²

and

α_AC W_C + α_ACC W_C²
where only one coefficient is used for each interfering element. The coefficients A_ij are used
for cases where absorption is the dominant effect. In this case, the coefficient B_ik is taken
equal to zero. If, for a given analyte, all B_ik coefficients are zero, Eq. (104) reduces to the
Lachance–Traill expression. When enhancement by element k dominates, a B_ik coefficient
is used. The corresponding A_ij coefficient is then taken equal to zero. Hence, the total
number of terms in both summations is n − 1.
The correction factor for enhancement by element k can be rewritten as

α_ik = B_ik / (1 + W_i)        (105)

showing that α_ik varies with concentration in a nonlinear fashion. The algorithm is very
popular when analyzing stainless steels and steels in general.
Among the disadvantages of the Rasberry–Heinrich algorithm are the following:
m_ij ≈ α_ij^B = a_ij1 + a_ij2 W_m / [1 + a_ij3 (1 − W_m)]        (106)

with

W_m = 1 − W_i        (107)

where W_m is the concentration of all matrix elements. It has been shown by Lachance and
Claisse (1980), as well as by Tertian (1976), that variable binary coefficients must be
expressed in terms of W_m (or 1 − W_i). For binary specimens, Eq. (106) can be rewritten
using W_j for W_m and W_i for (1 − W_m):

m_ij ≈ α_ij^B = a_ij1 + a_ij2 W_j / (1 + a_ij3 W_i)        (108)
For specimens with more than two compounds, however, the difference between Eqs. (106)
and (108) becomes clear. The value for the influence coefficient m_ij is approximated over
the complete concentration range for the binary by the function in Eq. (106), which relies
on three coefficients only. The excellent agreement between the true influence coefficient
m_ij and the approximation of Eq. (106) is shown in Figure 12 for Fe in Fe–Ni (severe
enhancement) and for Fe in Fe–Cr (pronounced absorption).
For multielement specimens, cross-product coefficients α_ijk are used to correct for the
crossed effect, similar to Eq. (99). The general equation for a multielement specimen is

W_i = R_i [ 1 + Σ_{j≠i}^{n} ( a_ij1 + a_ij2 W_m / (1 + a_ij3 (1 − W_m)) ) W_j
          + Σ_{j≠i}^{n} Σ_{k≠i, k>j}^{n} a_ijk W_j W_k ]        (109)

where the summation over j has n − 1 terms (all n elements, except i) and the summation
over k has (n−2)/2 terms (all n elements, except the analyte i and element j; furthermore, if
a_ijk is used, then a_ikj is not).
Vrebos and Helsen (1986) have published some data on this algorithm, clearly
showing the accuracy of the algorithm, using theoretically calculated intensities. The use of
theoretically calculated intensities has the advantage that it avoids errors due to specimen
preparation and measurement errors associated with actual measured data. Pella and co-
workers (1986) have presented a comparison of the algorithm with several others and with
a fundamental parameter method using experimental data.

Table 6  Composition of the Specimens Used for the Calculations of the
Coefficients for Lachance's Three-Coefficient Algorithm, in Weight Fraction
Specimen No.  W_i  W_j  W_k
Calculation of the coefficients. The coefficients a_ij1, a_ij2, and a_ij3 are calculated using
fundamental parameters at three binaries i-j. The cross-product coefficients are calculated
from a ternary. The compositions of the specimens concerned are listed in Table 6. The
specimens referred to in Table 6 are hypothetical specimens. The intensities are calcu-
lated from fundamental parameters and require no actual measurements on real specimens.
Step 1. Calculate the relative intensity R_i for the first composition in Table 6. If the
analysis of interest has more than three elements, then the system is divided in combi-
nations of three elements i, j, k at a time. The analyte is element i, and j and k are two
interfering elements. If the system considered is with compound phases, such as oxides,
then the compositions in Table 6 are assumed to be for the oxides.
Step 2. Using Eq. (84), the corresponding influence coefficient α_ij^B can be calculated.
Step 3. For this composition, W_m = 1 − W_i = W_j = 0.001, which is small enough to
be considered zero. Hence, Eq. (106) reduces to

α_ij^B = a_ij1        (110)

α_ij^B has been calculated in Step 2, so a_ij1 can be computed.
Step 4. Calculate the intensity for the second composition of Table 6 and use Eq. (84)
to calculate α_ij^B. In most cases, this value will be different from the one found in Step 2
because the compositions involved are different.
Step 5. 1 − W_m = W_i = 0.001 is small enough to be considered zero; hence, Eq. (106)
reduces to

α_ij^B = a_ij1 + a_ij2        (111)

a_ij1 and α_ij^B are known, so a_ij2 can be calculated.
Step 6. Calculate the intensity for the third composition of Table 6 and use Eq. (84)
to calculate α_ij^B. In most cases, this value will be different from the one found in Step 2 or 4
because the compositions involved are different.
Step 7. Using W_m = 1 − W_i = 0.5 = W_i, Eq. (106) reduces to

α_ij^B = a_ij1 + a_ij2 (0.5) / [1 + a_ij3 (0.5)]        (112)

from which a_ij3 can be computed.
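Steps 1–7 amount to evaluating Eq. (106) at three binaries. The sketch below uses synthetic α_ij^B values generated from known coefficients, so the round trip can be checked; in practice the α_ij^B values would come from fundamental-parameter intensities via Eq. (84):

```python
# Sketch of Steps 1-7: recover the three coefficients a1, a2, a3 of
# Eq. (106) from binary influence coefficients alpha_B evaluated at the
# three hypothetical binaries of Table 6 (W_j = 0.001, 0.999, 0.5).
# The alpha_B values are synthetic, generated from known a1, a2, a3.
def cola(a1, a2, a3, wm):
    return a1 + a2 * wm / (1.0 + a3 * (1.0 - wm))

a_true = (0.9, 0.6, -0.3)               # placeholder "true" coefficients
alpha_1 = cola(*a_true, wm=0.001)       # Step 3, Eq. (110): ~ a1
alpha_2 = cola(*a_true, wm=0.999)       # Step 5, Eq. (111): ~ a1 + a2
alpha_3 = cola(*a_true, wm=0.5)         # Step 7, Eq. (112)

a1 = alpha_1                             # W_m ~ 0: second term vanishes
a2 = alpha_2 - a1                        # 1 - W_m ~ 0: denominator ~ 1
# Solve Eq. (112) for a3: alpha_3 = a1 + 0.5*a2 / (1 + 0.5*a3)
a3 = (0.5 * a2 / (alpha_3 - a1) - 1.0) / 0.5
```

The small residual errors in a1 and a2 come from treating 0.001 as exactly zero, as the text does.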
The approximation is shown in Figure 13 for the Fe–Ni and the Fe–Cr binaries. The
agreement for the straight line is obviously not as good as with the COLA algorithm,
especially in those cases where the value of the true influence coefficient varies markedly,
as is the case for Fe in Fe–Cr (absorption). Equation (117) has been compared to the
three-coefficient algorithm of Lachance by Vrebos and Helsen (1986). They show that the
accuracy is somewhat less than for Lachance's method, but for most practical purposes,
the Rousseau algorithm should give acceptable results.
Calculation of the coefficients. Rousseau has shown that the fundamental para-
meter equation can be rearranged to

W_i = R_i [ 1 + Σ_{j≠i}^{n} α_ij W_j ]        (118)

and he also proposed a method to calculate the α coefficients directly from fundamental
parameters, without calculating the intensity first (Rousseau, 1984a). As a matter of fact,
Rousseau first calculates the coefficients for a given composition and then calculates the
intensity, using Eq. (118). The coefficients in Eq. (116) are calculated in a way very similar
to the method described in Sec. V.D.5.d. The compositions involved are given in Table 8.
The specimens referred to in Table 8 are hypothetical specimens. The intensity is cal-
culated from fundamental parameters and requires no actual measurements on real spe-
cimens. For the first two binaries of Table 8, the influence coefficient is calculated [symbols
α_ij(0.20, 0.80) and α_ij(0.80, 0.20), respectively]. Then the corresponding values are sub-
stituted in Eq. (117):

α_ij(0.20, 0.80) = a_ij1 + a_ij2 (0.80)        (119a)
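Assuming, by analogy with Eq. (119a), that the second binary yields α_ij(0.80, 0.20) = a_ij1 + a_ij2 (0.20), the two coefficients follow from a 2×2 linear system (the α values below are placeholders):

```python
# Sketch: recover the linear-approximation coefficients a_ij1, a_ij2 of
# Eq. (117) from the two binaries of Table 8. The alpha values are
# synthetic placeholders; the second relation is assumed by analogy
# with Eq. (119a).
alpha_20_80 = 1.38   # placeholder alpha_ij at (W_i, W_j) = (0.20, 0.80)
alpha_80_20 = 1.02   # placeholder alpha_ij at (W_i, W_j) = (0.80, 0.20)

# Two linear equations in a_ij1, a_ij2:
#   alpha_20_80 = a_ij1 + 0.80 * a_ij2
#   alpha_80_20 = a_ij1 + 0.20 * a_ij2
a_ij2 = (alpha_20_80 - alpha_80_20) / (0.80 - 0.20)
a_ij1 = alpha_20_80 - 0.80 * a_ij2
```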
the relative intensity of FeKα as a function of the weight fraction of Fe in Fe–Ni–Cr spe-
cimens. There is considerable spread of the intensity of FeKα, even for a constant weight
fraction of Fe. For specimens with a weight fraction of 0.10 Fe, the relative intensity of
FeKα varies between 0.036 and 0.16 (points marked 1 and 2 in Fig. 14). This is due to the
rather different effect that Ni and Cr have on Fe: Cr is an absorber for FeKα radiation,
whereas the NiK radiation can enhance FeK radiation through the process of secondary
fluorescence (enhancement). For these specimens, the matrix effect M_Fe can be calculated
from Eq. (75). The total matrix effect on Fe, M_Fe(Fe–Ni–Cr), in these specimens, at a fixed
Fe concentration of 0.1, for example, varies from 0.63 (for 0.1 Fe in Fe–Ni, point 2) to 2.8
(for 0.1 Fe in Fe–Cr, point 1).

Table 8  Composition of the Specimens Used for the Calculations of the
Coefficients for the Linear Approximation According to Rousseau's Algorithm,
in Weight Fraction
Specimen No.  W_i  W_j  W_k
Now, the problem is how to calculate the matrix effect in this case, based on in-
fluence coefficients. Assume a specimen with the following composition: W_Fe = 0.1,

Figure 14  The relative intensity of FeKα as a function of the concentration of Fe, in the presence
of Ni and Cr. For every given weight fraction of Fe, the highest value of the intensity is obtained for
the binary system Fe–Ni (enhancement), whereas the lowest intensity is for the binary Fe–Cr
(absorption). The intermediate values are for ternary specimens, where the concentrations of Ni and
Cr vary in steps of 0.1 weight fraction. At a weight fraction of Fe = 0.7, the four data points labeled
a, b, c, and d represent the following specimens (W_Fe, W_Ni, W_Cr): a = (0.7, 0.0, 0.3), b = (0.7, 0.1,
0.2), c = (0.7, 0.2, 0.1), and d = (0.7, 0.3, 0.0). Points labeled 1 and 2: see text. Experimental
conditions: W tube at 45 kV, 1-mm Be window, incidence and take-off angles 63° and 33°,
respectively.
α_ij^M = α_ij^B + t_ijk W_k        (126)

where t_ijk is a coefficient expressing the effect of element k on the influence coefficient α_ij^M.
Similarly,

α_ik^M = α_ik^B + t_ikj W_j        (127)

and is based on cross-product coefficients to correct for the crossed effect introduced by the
use of binary influence coefficients. The use of the cross-product coefficients is not
mandated by the concentration range to be covered (the binary coefficients as calculated
by, for example, the algorithm of Lachance are more than adequate) but is a consequence
of the use of binary coefficients.
7. Application
In Secs. V.D.4 and V.D.5, several influence coefficient algorithms have been discussed.
Application of the resulting equations for calibration and analysis will be discussed here
and is equally valid for any of the influence coefficient algorithms.
a. Calibration
Step 1. It is assumed that the coefficients have been calculated from theory, for
example, using Eq. (84) or (97).
Step 2. Calculate the matrix correction term [the square brackets in Eqs. (81), (85),
(99), (104), and (109)] for all standard specimens and for a given analyte. The coefficients
are known (Step 1), and for standard specimens, all weight fractions W_i and W_j are
known.
Step 3. Plot the measured intensity of the analyte, multiplied by the corresponding
matrix correction term, against the analyte weight fraction. Then, determine the best line,

W_i = B_i + K_i I_i [ 1 + Σ_j m_ij W_j ]        (132)

by minimizing ΔW_i (see Sec. V.A). Note that Eq. (132) is more general than Eq. (50),
which does not correct for matrix effects. This process is repeated for all analytes. Other
methods are also feasible. The most common variant is the one where

W_i / [ 1 + Σ_j m_ij W_j ] = B_i + K_i I_i        (133)

is used. This is nearly equivalent to Eq. (132), but with the background term inside the
brackets:

W_i = (B_i + K_i I_i) [ 1 + Σ_j m_ij W_j ]        (134)

The term B_i + K_i I_i is related directly to the relative intensity R_i. Corrections for line
overlap should only affect this term.
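The calibration then reduces to a straight-line fit of the matrix-corrected weight fraction against the measured intensity; a sketch with synthetic standards and placeholder influence coefficients:

```python
# Sketch of the calibration step: determine B_i and K_i by a least-squares
# straight line through W_i / [1 + sum_j m_ij W_j] versus I_i (Eq. 133).
# Standards, intensities, and coefficients are synthetic placeholders.
import numpy as np

m_ij = {"Cr": 1.2, "Ni": -0.4}           # influence coefficients (placeholders)
standards = [                             # (W_Fe, matrix weight fractions, I_Fe)
    (0.10, {"Cr": 0.60, "Ni": 0.30}, 12.0),
    (0.50, {"Cr": 0.30, "Ni": 0.20}, 80.0),
    (0.70, {"Cr": 0.20, "Ni": 0.10}, 130.0),
    (0.90, {"Cr": 0.05, "Ni": 0.05}, 190.0),
]

y = [w / (1.0 + sum(m_ij[el] * wm[el] for el in m_ij))
     for w, wm, _ in standards]           # matrix-corrected weight fractions
x = [i for _, _, i in standards]
K_i, B_i = np.polyfit(x, y, 1)            # slope K_i, intercept B_i
```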
b. Analysis
For each of the analytes, a set of equations has to be solved for the unknowns W_i, W_j, and
so forth. If the matrix correction term used is the one according to Lachance and Traill
[Eq. (81)] or de Jongh [Eq. (85)], then the set of equations can be solved algebraically (n
linear equations with n unknowns for Lachance and Traill, and n−1 equations with n−1
unknowns for de Jongh). Mostly, however, an iterative method is used. As a first estimate,
one can simply take the matrix correction term equal to 1. This yields a first estimate of the
composition W_i, W_j, and so on. This first estimate is used to calculate the matrix cor-
rection terms for all analytes. Subsequently, a new composition estimate can be obtained.
This process is repeated until none of the concentrations changes between subsequent
iterations by more than a preset quantity.
If the matrix correction is done using algorithms which use more than one coefficient
{e.g., Claisse and Quintin [Eq. (99)] or Rasberry and Heinrich [Eq. (104)]}, then the
equations are not linear in the unknown concentrations and an algebraic solution is not
possible. An iterative method, such as described earlier, can be used.
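The iteration described above can be sketched generically; here the matrix correction for the analyte uses the Rasberry–Heinrich form [Eqs. (104) and (105)] with placeholder coefficients, while the interferents are left uncorrected for simplicity:

```python
# Sketch of the general iterative scheme: start with the matrix correction
# equal to 1, then alternate between estimating the composition and
# re-evaluating the correction term. The A and B coefficients and the
# relative intensities are placeholders, not values for real steels.
A_coef = {("Fe", "Cr"): 1.1}             # absorption coefficients
B_coef = {("Fe", "Ni"): -0.5}            # enhancement coefficients

def correction(i, W):
    """Rasberry-Heinrich matrix correction term for analyte i."""
    term = 1.0
    for (a, j), A in A_coef.items():
        if a == i:
            term += A * W[j]             # absorption: A_ij * W_j
    for (a, k), B in B_coef.items():
        if a == i:
            term += B / (1.0 + W[i]) * W[k]   # enhancement, Eq. (105)
    return term

R = {"Fe": 0.40, "Cr": 0.30, "Ni": 0.20}
corr = {"Fe": correction,
        "Cr": lambda i, W: 1.0,          # interferents uncorrected here
        "Ni": lambda i, W: 1.0}

W = {i: R[i] for i in R}                 # correction term = 1 at start
for _ in range(100):
    W_new = {i: R[i] * corr[i](i, W) for i in R}
    if max(abs(W_new[i] - W[i]) for i in R) < 1e-10:
        break
    W = W_new
```

Because the enhancement term depends on W_i itself, the equations are nonlinear and the iterative route is the natural one.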
where α_ij represents the influence coefficient of element j on the analyte i and t_i is the time
(in s) required to accumulate a preset number of counts. The constants α_ij are determined
from measurements on specimens with known composition. Determination of the com-
position of an unknown involves the solving of the above set of linear equations
[Eq. (135)]. This set, however, is homogeneous: Its constant terms are all equal to zero. So,
only ratios among the unknown W_i can be obtained. In order to obtain the weight frac-
tions W_i, an extra equation is required. Sherman proposed using the sum of all the weight
fractions of all the elements (or components) in the specimen, which, ideally, should be
equal to unity. For a ternary specimen,

W_A + W_B + W_C = 1        (136)
Using Eq. (136), one of the equations in the set of Eqs. (135) can be eliminated. The
solution obtained, however, is not unique: For a ternary, any one of the three equations
can be eliminated. This yields three different combinations. Furthermore, any of the
three elements can be eliminated in each of the combinations. Hence, a total of 3 × 3 = 9
different sets can be derived from Eqs. (135) and (136), and each of these sets will
generate different results. In general, the algorithm yields n² different results for a system
with n elements or compounds. This is clearly undesirable, because it is hard to de-
termine which set will give the most accurate results. Another disadvantage is the fact
that the sum of the elements determined always equals unity, even if the most abundant
element has been neglected. Furthermore, the numerical values of the coefficients de-
pend, among other parameters such as geometry and excitation conditions, also on the
number of counts accumulated. Nonquantifiable parameters, such as the reflectivity of the
diffracting crystal used in wavelength-dispersive spectrometers, or tube contamination,
will also affect the values of the coefficients. The coefficients determined on a given
spectrometer cannot be used with another instrument; they are not transferable.
The other algorithms discussed use some form of a ratio method: The Lachance and
Traill algorithm, for example, uses relative intensities. The measurements are then done
relative to a monitor; this reduces, or eliminates, the effect of such nonquantifiable
parameters.
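Sherman's procedure can be sketched as follows: one row of the homogeneous set (135) is replaced by the sum constraint (136). The 3×3 coefficient matrix below is a placeholder whose columns sum to zero, so a nontrivial solution exists:

```python
# Sketch of Sherman's approach: the homogeneous set (135) only fixes the
# ratios of the W_i, so one equation is replaced by the sum constraint
# (136). The matrix M is a placeholder (columns sum to zero, making the
# homogeneous system singular and a nontrivial solution possible).
import numpy as np

M = np.array([[-0.8,  0.3,  0.2],    # homogeneous set: M @ W = 0
              [ 0.4, -0.9,  0.1],
              [ 0.4,  0.6, -0.3]])

A = M.copy()
A[2, :] = 1.0                         # replace one row by W_A + W_B + W_C = 1
b = np.array([0.0, 0.0, 1.0])
W = np.linalg.solve(A, b)
```

Replacing a different row, or eliminating a different element, generally yields a different answer with real (noisy) coefficients, which is exactly the non-uniqueness criticized in the text.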
b. The Algorithm of Lucas-Tooth and Price
Lucas-Tooth and Price (1961) developed a correction algorithm, where the matrix effect
was corrected for using the intensities (rather than the concentrations) of the interfering
elements. The equation can be written as

W_i = B_i + I_i [ k_0 + Σ_{j≠i}^{n} k_ij I_j ]        (137)
where B_i is a background term and k_0 and k_ij are the correction coefficients. A total of
n + 1 coefficients have to be determined, requiring at least n + 1 standards. Usually,
however, a much larger number of standards is used. The coefficients are then determined
by, for example, a least-squares method. The corrections for the effect of the matrix on the
analyte are done via the intensities of the interfering elements; their concentrations are not
required. The method assumes that the calibration curves of the interfering elements
themselves are all linear; the correction is done using intensities rather than concentra-
tions. The algorithm will, therefore, have a limited range. Its use will be limited to ap-
plications where only one or two elements are to be analyzed (it still involves
measurements of all interfering element intensities) and where a computer of limited
capabilities is used (although calculation of the coefficients involves much more compute
capability than the subsequent routine analysis of unknowns).
The advantages of the method are as follows:
The method is very fast, because the calculation of the composition of the unknowns
requires no iteration.
Analysis of only one element is possible; this requires, however, the determination of
all relevant correction factors.
It is a very simple algorithm, requiring very little calculation.
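Equation (137) is a direct, non-iterative evaluation; a sketch with placeholder coefficients:

```python
# Sketch of the Lucas-Tooth and Price correction, Eq. (137): the analyte
# concentration follows directly from measured intensities, no iteration.
# All numerical values are placeholders for illustration.
B_i = 0.02                                # background term
k0 = 0.004                                # overall sensitivity coefficient
k_ij = {"Cr": -2.0e-5, "Ni": 1.5e-5}      # per-interferent coefficients

def lucas_tooth_price(I_i, I_matrix):
    """Eq. (137): W_i = B_i + I_i * (k0 + sum_j k_ij * I_j)."""
    return B_i + I_i * (k0 + sum(k_ij[j] * I_matrix[j] for j in k_ij))

W_Fe = lucas_tooth_price(I_i=120.0, I_matrix={"Cr": 300.0, "Ni": 150.0})
```

Only intensities enter the right-hand side, which is why no iteration over estimated concentrations is needed.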
c. Algorithms Based on Concentrations
Algorithms similar to Eq. (137) have been proposed, using corrections based on con-
centrations rather than intensities. The values of the coefficients were then to be derived
from multiple-regression analysis on a large suite of standards. The main aim was to
obtain correction factors that could be determined on one spectrometer and used, without
alteration, on another instrument. In practice, the coefficients still have to be adjusted
because of the intimate and inseparable entanglement of spectrometer-dependent factors
with matrix effects. Furthermore, compared to the algorithms based on intensities, some of
the advantages of the latter are not retained: A calibration for all elements present is now
required, calculation of the composition of unknowns requires iteration, and so forth.
In principle, methods based on theoretically calculated influence coefficients are re-
commended.
VI. CONCLUSION
Among the advantages of XRF analysis are the facts that the method is nondestructive
and allows direct analysis involving little or no specimen preparation. Analysis of major
and minor constituents requires correction for matrix effects of variable (from one spe-
cimen to another) magnitude. If the matrix varies appreciably from one specimen to the
next, then even the intensity of elements present at a trace level can be subject to matrix
effects, and a correction is required. Several methods for matrix correction have been
described. Each of these methods has its own advantages and disadvantages. These, by
themselves, do not generally lead to the selection of a best method. The choice of the
method to use is also determined by the particular application.
From the previous sections, it may appear that the mathematical methods are more
powerful than the compensation methods. Yet, if only one or two elements at a trace level
in liquids have to be determined, compensation methods (either standard addition or the
use of an internal standard) can turn out to be better suited than, for example, rigorous
fundamental parameter calculations. Compensation methods will correct for the effect of
an unknown, but constant, matrix. Also, they do not require the analysis of all con-
stituents in the specimen. The mathematical methods (fundamental parameters as well as
methods based on theoretical influence coefficients), on the other hand, can handle cases in
which the matrix effect is more variable from one specimen to another. In this respect, they
appear to be more flexible than the compensation methods, but they do require more
knowledge of the complete matrix. All elements contributing significantly to the matrix
effect must be quantified (either by x-ray measurement or by another technique), even if the
determination of their concentrations is not required by the person who submits the
sample to the analyst. Once a particular algorithm is selected, it is customary to use it for
all analytes. However, it must be stressed that this is not a requirement. There is only one
requirement for adequate matrix correction: Each analyte should be corrected adequately,
by whatever method.
If complete analysis (covering all major elements) is required, the analyst has the
choice between the fundamental parameter method and algorithms based on influence
coefficients. Commonly, fundamental parameter methods are (or were) used in research
environments rather than for routine analysis in industry. This choice is more often made
on considerations such as the availability of the programs and computers than on dif-
ferences in analytical capabilities. Influence coefficient algorithms tend to be used in
combination with more standards compared to fundamental parameter methods, because
their structure and simple mathematical representation facilitate interpretation of the
data (establishing a relationship between concentration and intensity, corrected for matrix
effects). The final choice, however, has to be made by the analyst.
REFERENCES
Anderson CH, Mander JE, Leitner JW. Adv X-Ray Anal 17:214, 1974.
Australian Standard 2563-1982, Wavelength Dispersive X-ray Fluorescence Spectrometers -
Methods of Test for Determination of Precision. North Sydney, NSW: Standards Association
of Australia, 1982.
Bambynek W, Crasemann B, Fink RW, Freund HU, Mark H, Swift CD, Price RE, Venugopala
Rao P. Rev Mod Phys 44:716, 1972.
Bearden JA. Rev Mod Phys 39:78, 1967.
Beattie HJ, Brissey RM. Anal Chem 26:980, 1954.
Bonetto RD, Riveros JA. X-Ray Spectrom 14:2, 1985.
Claisse F, Quintin M. Can Spectrosc 12:129, 1967.
Criss JW. Adv X-Ray Anal 23:93, 1980a.
Criss JW. Adv X-Ray Anal 23:111, 1980b.
Criss JW, Birks LS. Anal Chem 40:1080, 1968.
Criss JW, Birks LS, Gilfrich JV. Anal Chem 50:33, 1978.
de Boer DKG. Spectrochim Acta 44B:1171, 1989.
de Jongh WK. X-Ray Spectrom 2:151, 1973.
Bertin EP. Principles and Practice of X-Ray Spectrometric Analysis. 2nd ed. New York: Plenum
Press, 1975.
Jenkins R, Gould RW, Gedcke D. Quantitative X-Ray Spectrometry. 2nd ed. New York: Marcel
Dekker, 1995.
Tertian R, Claisse F. Principles of Quantitative X-Ray Fluorescence Analysis. New York: Wiley,
1982.