
Tutorial 6.1: Determination of Measurement Uncertainty for Radiochemical Analysis

Slide 1.

Determination of Measurement Uncertainty for Radiochemical Analysis. In this module we will examine how the uncertainty in radiochemical measurements is calculated.

Slide 2. Learning Objectives
This module will cover the basics of determining uncertainty of radiochemical measurements. You will be able to:
- Define the terms accuracy, precision, standard uncertainty, and coverage factor as they apply to statistics.
- Identify measurement parameters that will follow the normal distribution.
- Calculate the counting uncertainty given the number of counts, or the count rate and count time.
- Identify the various measurements made in radiochemical analyses that factor into the final combined uncertainty.
- Calculate a critical level, MDC, and SDWA detection limit based on counting parameters and sample-specific parameters.

The key concepts that will be emphasized in this module are:
- Every measurement has an uncertainty associated with it that can either be calculated or estimated.
- The uncertainty due to radioactive measurements can be calculated knowing the number of counts and the count interval.
- The sample critical level is the determining factor for whether a sample has detectable activity.
- The determination of detectability is based in part on the tolerable error rate that is selected.

Slide 3. Measurement Results as Random Variables
Whenever any kind of measurement is made, there is uncertainty in the measurement with respect to the true value. Suppose that a measurement is repeated:
- by the same person,
- by a different person,
- using a different instrument, or
- using a different part of the same sample.

For each of these four different instances you would likely get a slightly different result. That is because any time a measurement is repeated, even by the same person using the same instrument on the same material, the distribution of the final measured values will vary in a manner that can be described probabilistically. For example, draw a line, and then use a ruler to measure its length. Then give the ruler and the paper with the line to another person to make the same measurement. The estimate they make of the line length using that ruler will vary slightly from the original measurement you made. The things that affect the difference in measurement are interpolation of the markings on the ruler, visual acuity of the observer, variation in the temperature of the room, etc. All measurements suffer from these types of variables, and others as well. These cause our measured values to differ slightly each time we make them. There are several terms used to describe the probabilistic distribution of results: the mean, the variance, the standard deviation, and others. The key concept to remember is that no matter what the measurement is, there is an uncertainty associated with it that can either be calculated or estimated.

Slide 4. Gaussian Probability Distribution
The figure on this slide is for a Gaussian distribution of results. Sometimes this is also referred to as the normal distribution, since the probabilities of measurements that represent the true value are distributed equally on both sides of the true value. The x-axis represents the range of possible measured values and the y-axis represents the number of times a measured value will be observed. There are some important features to note about this type of distribution. First, the mean value is exactly in the middle of the distribution. In a true Gaussian distribution the mean and the median (the middle value) will be the same. They will also both have the highest probability of all possible values; in statistical parlance, the result that occurs most frequently is called the mode. Second, the equation for this distribution has an exponential function, which causes the probability of a result occurring to decrease significantly as one gets farther away from the mean value in both directions. However, the curve never goes to zero. Third, we can divide the curve up into segments that represent certain fractions of the total area under the curve.
The parameter sigma, called the standard deviation, is the measure of the width of the curve. It can be used to identify different portions of the total area under the entire curve. As shown on the figure, if we take the area under the curve that is encompassed by the mean value plus and minus 1.96 sigma, 95% of all results will normally fall into this range. So if the mean were 10.0 pCi and sigma were 1.0 pCi, then the range of values that would correspond to 95% of the possible measurements would be from 10 − 1.96 to 10 + 1.96, or 8.04 to 11.96 pCi.

Slide 5. Examples of Probability Distributions
The figure on this slide shows another type of probability distribution. This one is described by a linear relationship on each side of the mean value. This type of distribution is what might be assumed for use of a volumetric pipet.
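The 95% interval from the Slide 4 example can be checked with a few lines of arithmetic. This is a minimal sketch using the example values above (mean 10.0 pCi, sigma 1.0 pCi) and the two-sided 95% coverage factor of 1.96:

```python
# 95% interval for a Gaussian distribution (Slide 4 example values).
mean = 10.0   # pCi
sigma = 1.0   # pCi
k = 1.96      # coverage factor for a two-sided 95% interval

low, high = mean - k * sigma, mean + k * sigma
print(f"95% of results fall between {low:.2f} and {high:.2f} pCi")
# -> 95% of results fall between 8.04 and 11.96 pCi
```
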

Slide 6. Statistical Terms
The terms and their definitions shown on this slide are commonly used in discussions of uncertainty of measurements. One of the important concepts to remember is that when we perform an analysis, our measurement is a representation of all the possible measurements that could be made. That is, our single measurement ends up representing the entire population. Even if we perform replicate analyses on a sample, it is still only a sample of the entire population.

Slide 7. Mean and Standard Deviation
The mean of a series of measurements is simply the sum of all the individual values divided by the total number of measurements made. Traditionally this parameter has been represented by an x with a bar above it, or x-bar. In more statistical parlance it is referred to as q-bar. The equation to determine the experimental standard deviation is shown on the slide. The term variance is the square of the standard deviation. Variances of different measurements can be added directly, whereas standard deviations cannot.

Slide 8. Uncertainty vs. the Standard Uncertainty
The definition of uncertainty according to the Guide to the Expression of Uncertainty in Measurement is quoted on this slide. Since every measurement has an associated uncertainty, any result must be reported with either a calculated or estimated uncertainty. ALWAYS! Reporting the uncertainty with the measurement allows the data user to have a certain level of confidence in how closely the value reported represents the true value. For example, suppose you were to get a report for uranium in drinking water of 25 pCi/L. Without the reported uncertainty you have no idea what this value represents with regard to the accuracy of the measurement made. Values should be reported with their uncertainty as a multiple of the standard deviation. When one standard deviation is used this is referred to as the standard uncertainty. It is also called the one-sigma uncertainty.
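The mean and experimental standard deviation from Slide 7 can be computed directly. The replicate values below are hypothetical, chosen only to illustrate the formulas (note the n − 1 divisor for the experimental standard deviation of a sample):

```python
import math

# Mean and experimental standard deviation for replicate measurements
# (hypothetical values in pCi, for illustration only).
values = [10.0, 12.0, 11.0, 13.0, 9.0]

n = len(values)
mean = sum(values) / n                                     # x-bar
variance = sum((x - mean) ** 2 for x in values) / (n - 1)  # s^2 (the variance)
std_dev = math.sqrt(variance)                              # s, one standard deviation

print(f"mean = {mean:.2f} pCi, s = {std_dev:.3f} pCi")
```

Variances, not standard deviations, are what add directly, which is why the propagation formulas later in this module work with squared uncertainties.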
So let's go back to our example and say the value is 25 ± 350 pCi/L, where we have reported the one-sigma uncertainty. The meaning of this value is much clearer; with such a large uncertainty, the value cannot tell us how close to the drinking water limit of 20 pCi/L the sample actually is. In contrast, if the reported result were 25 ± 1.2 pCi/L at one-sigma uncertainty, we could have a good deal of confidence that this sample has exceeded the 20 pCi/L limit.

Slide 9. Combined Standard Uncertainty
In order to calculate the combined standard uncertainty we need to establish what mathematical model is being used. The output estimate is the result of combining several different measured parameters to yield a final result. Each measured parameter will have an uncertainty associated with it. Let's use an example of a measured number of sample counts from which the background counts are subtracted to get the net counts. Each measurement of counts has an uncertainty associated with it. The net counts are the output parameter, and it is easy to see that sample minus background equals net counts. However, how do the uncertainties of each measurement combine?

Slide 10. Combining Uncertainties (Uncertainty Propagation)
The method of calculating a combined uncertainty has a formula, just as we had a simple formula to calculate the output estimate. The combined standard uncertainty is calculated by using mathematical formulas; this process is called the propagation of uncertainty. We will not discuss how the mathematical formulas are derived; however, we will discuss how the formulas are used for many of the radiochemical methods to calculate the combined standard uncertainty. For more details on uncertainty and propagation of uncertainty see Chapter 19 of MARLAP.

Slide 11. Components of Uncertainty
It turns out that similar mathematical functions will have similar methods of combining uncertainties. Thus the mathematical functions of addition and subtraction have the same formula for combining uncertainties. Multiplication and division are similar to each other but different from addition and subtraction; therefore, multiplication and division have a separate formula. As shown on this slide, when we add or subtract two measurement values, their uncertainties are combined as the square root of the sum of the individual uncertainties squared. This is referred to mathematically as combining in quadrature.

Slide 12. Combining Uncertainties when Output is Based on Addition
The example shown here combines two measurement values, 15 and 10. If the uncertainty for the measurement of 15 is 1.5 and the uncertainty for the measurement of 10 is 2, the final value will be 25 ± 2.5, where the value of 2.5 is one combined standard uncertainty.

Slide 13. Relative Combined Standard Uncertainty
In a previous slide we identified that the functions of addition and subtraction have the same technique of combining uncertainties. So in the example on the previous slide, if our output estimate had been based on subtraction rather than addition, the combined standard uncertainty would still have been 2.5.
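The quadrature rule for addition and subtraction described above can be sketched with the Slide 12 values (15 ± 1.5 and 10 ± 2):

```python
import math

# Quadrature combination for addition/subtraction (Slides 11-13):
# u_combined = sqrt(u1^2 + u2^2).
u1, u2 = 1.5, 2.0
u_combined = math.sqrt(u1**2 + u2**2)

print(15 + 10, "+/-", u_combined)   # addition: 25 +/- 2.5
print(15 - 10, "+/-", u_combined)   # subtraction: 5 +/- 2.5 (same combined uncertainty)
```
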
The area on this slide shaded in blue summarizes these two different functions and the associated uncertainty for each. The relative combined uncertainty for each is different. The relative combined uncertainty takes the combined uncertainty and divides it by the final output estimate. If we do this for the two separate functions in the examples shown, the relative combined standard uncertainty is 10% for addition, but 50% for subtraction. Relative uncertainty is a measure of analytical precision; whenever possible we would like a low percentage for this value. The concept of relative combined standard uncertainty shown here identifies how the overall method of computation of the output estimate affects the relative combined standard uncertainty, and emphasizes that each measurement uncertainty can have significant effects on analytical precision.
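The contrast just described is worth seeing numerically: the same combined uncertainty of 2.5 is a small fraction of the sum (25) but a large fraction of the difference (5):

```python
import math

# Relative combined standard uncertainty (Slide 13): identical combined
# uncertainties give very different relative uncertainties depending on
# the size of the output estimate.
u_combined = math.sqrt(1.5**2 + 2.0**2)   # 2.5, as before

rel_addition = u_combined / (15 + 10)     # 2.5 / 25 -> 10%
rel_subtraction = u_combined / (15 - 10)  # 2.5 / 5  -> 50%

print(f"addition: {rel_addition:.0%}, subtraction: {rel_subtraction:.0%}")
```

This is exactly why net-count measurements close to background (small differences of two large numbers) carry large relative uncertainties.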

Slide 14. Combining Uncertainties when Output is Based on Multiplication
Given the formula for the output estimate shown for Y, the method of calculating the combined standard uncertainty is shown on the next line. This is the general form for the combined standard uncertainty that will be used in most radiochemical analyses. This format divides the squares of the uncertainties by the squares of their individual input estimates and then combines them in quadrature. Note that the same form is used for both multiplication and division.

Slide 15. Factors Contributing to Uncertainty of the Final Result
Radiochemical analytical results have many factors that are used to determine the output estimate. Listed on this slide are some of the input parameters that are used to get the final result. Notice that Published Values for Constants is one of the categories. This means that things like the half-life or the decay abundance factor contribute to uncertainty. One might initially think that these are constants and so they don't have any uncertainty associated with them. Keep in mind, however, that any constant is determined based on certain measurements; thus there is uncertainty associated with it as well. Keep each of these parameters in mind as we progress through the rest of this module.

Slide 16. Typical Uncertainties for Radiochemical Measurements
The tables on this slide and the next one identify the relative standard uncertainty associated with each type of input parameter used in the determination of a radioactivity concentration.

Slide 17. Typical Uncertainties for Radiochemical Measurements
Most analysts working with environmental samples tacitly assume that all of the uncertainty is due to the count measurement, since it is usually so close to background. Two of the most often ignored uncertainties that can contribute appreciably are the measurement of the radiochemical yield and the effect of attenuation (GPC analysis in particular).

Slide 18.
Counting Uncertainty: Poisson Distribution of Counts
Measurements made with traditional analytical instrumentation usually fall into the realm of Gaussian statistics with respect to the uncertainty of the measurement. Radioactive decay, and consequently the number of counts observed, follows a different probability distribution called the Poisson probability distribution. It can be shown that for this distribution the standard uncertainty of the observed counts is simply the square root of those counts. We are again not going to go into the derivation of uncertainty with Poisson statistics, but will identify the means for estimating the uncertainty of an individual count. Simply stated, for an individual count the standard uncertainty is the square root of the number of counts.

Slide 19. Poisson Uncertainty of the Counts
The determination of a radiochemical activity concentration in a sample will always require a sample count and a background count. We start here by determining the uncertainty of a single counting event. Looking at the example calculation on the right, we observe 169 counts. The estimated uncertainty associated with this single measurement is 13 counts. It is important to realize that even though we are taking the square root of the counts, the final result is still in units of counts. The relative standard uncertainty for this single measurement is 7.7%.

Slide 20. Poisson Uncertainty of the Count Rate
If you think back to one of the very first lessons, where we discussed what radioactivity is, you'll recall that radioactive decays (leading to counts) are spontaneous and random. You'll also recall that the longer the observation period, the greater the number of decays that will be observed. As we saw in the previous slide, the uncertainty is estimated by the square root of the counts. Putting all these concepts together, we can estimate the count rate uncertainty using the equations identified on this slide. Remember that the count rate R is equal to the number of counts N divided by the count time. For the example shown, if we record 169 counts in 100 minutes we get a count rate of 1.69 cpm with a standard uncertainty of 0.13 cpm.

Slide 21. Test Yourself Exercise 1
In the example shown here the count rate calculated is the same for each measurement. However, which of these will yield a lower relative uncertainty?

Slide 22. Test Yourself Exercise 1: Solution
For the two results we get uncertainties of 1 cps for the longer counting interval versus 10 cps for the shorter count interval. Thus the longer count interval provides a lower relative standard uncertainty even though the actual count rates are the same.

Slide 23. Test Yourself Exercise 2
Whenever we make a measurement of radioactivity we must take into account the instrument background. Thus the net counts are the sample gross counts minus the instrument background counts. Here the concept of quadrature will be used to estimate the standard uncertainty for the net count rate. The sample count rate is 100.0 ± 2.0 cps and the instrument background is 10.0 ± 1.5 cps. What is the result with its combined standard uncertainty?

Slide 24.
Test Yourself Exercise 2: Solution
The result, the net sample count rate with its combined standard uncertainty, is 90.0 ± 2.5 counts per second. Always report your analytical result with a stated uncertainty.

Slide 25. Safe Drinking Water Act Required Detection Limits
The Safe Drinking Water Act also requires that if a sample does not have detectable activity, a lower detection limit must be achieved for that sample. This lower value is the Required Detection Limit. How do we know if we can achieve these values?

Slide 26. Safe Drinking Water Act Definition of Detection Limit
The verbiage that describes how the detection limit is determined is excerpted here from 40 CFR 141.25. This definition is based only on the uncertainty of counting and nothing else. However, an equation that shows us how to calculate this is not provided in the CFR.
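The counting calculations from Slides 19 through 24 can be verified in a few lines. This sketch reproduces the 169-count example and the Exercise 2 net count rate:

```python
import math

# Poisson counting uncertainty (Slides 19-20): u(N) = sqrt(N),
# and for a count rate R = N/t, u(R) = sqrt(N)/t.
N, t = 169, 100.0                 # counts, count time in minutes
u_N = math.sqrt(N)                # 13 counts
rate = N / t                      # 1.69 cpm
u_rate = u_N / t                  # 0.13 cpm
rel = u_N / N                     # ~7.7% relative standard uncertainty

# Exercise 2 (Slides 23-24): net rate from gross 100.0 +/- 2.0 cps
# and background 10.0 +/- 1.5 cps, combined in quadrature.
net = 100.0 - 10.0
u_net = math.sqrt(2.0**2 + 1.5**2)   # 2.5 cps

print(f"{rate:.2f} +/- {u_rate:.2f} cpm ({rel:.1%}); net = {net:.1f} +/- {u_net:.1f} cps")
```
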

Slide 27. More on Safe Drinking Water Act Detection Limit
The words in the CFR tell us that the 95% confidence level is used to determine how large the uncertainty can be with regard to the actual measured value. As stated here, if a detection limit is defined as 3 pCi/L, its associated uncertainty at that concentration must be 3 pCi/L or less at the 95% confidence level. This means that 1.96 times the standard uncertainty would be equal to 3.0 pCi/L.

Slide 28. How to Calculate if the Required Detection Limit has Been Achieved
The two equations that are used to perform the calculation to determine if the detection limit has been achieved are shown here. The actual equation to be used is shown on the next slide.

Slide 29. The Safe Drinking Water Act Detection Limit Equation
Although this equation appears formidable, it really uses parameters that we set or measure. This equation should be used to determine if each sample meets the specific detection limit set forth in 40 CFR 141.25.

Slide 30. Example Safe Drinking Water Act Detection Limit Calculation
Shown here are example parameters that are used to calculate the sample activity. You should review the data in this slide and then use the equations from the previous slides to verify the net sample activity concentration plus its standard uncertainty, and also that the required detection limit has been achieved.

Slide 31. Combined Standard Uncertainty
The combined standard uncertainty for determining an activity concentration in a sample uses an equation that has all four mathematical functions in it, and as such will require the formula shown here.

Slide 32. Calculating Gross Alpha Results in Radiochemical Determinations
The equation used for determining the gross alpha activity in a sample is shown here. We are going to use the general formula for uncertainty and perform a combined uncertainty calculation for this analysis.

Slide 33. Test Yourself Exercise 3: Calculating the Gross Alpha Results
We have provided here some basic information for determining the gross alpha activity in a sample. The uncertainties for some of the measurements have been provided; the uncertainty for the counts and the combined standard uncertainty are to be calculated by you. Usually the uncertainty associated with the counting interval can be considered an insignificant contributor to the combined uncertainty.

Slide 34. Test Yourself Exercise 3: Solution for Activity Calculation
The first part of the calculation is to determine the net sample activity; this is needed to calculate the combined standard uncertainty. Show yourself that you can get the value of 87.44 pCi/L for the gross alpha concentration.
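The Exercise 3 propagation can be sketched with the general multiplicative formula. The net count rate (3.3 cpm) and its variance (0.037 cpm²) come from the exercise; the efficiency and volume values and their uncertainties below are assumed for illustration only, chosen to reproduce the 87.44 pCi/L result, not taken from the slide:

```python
import math

# Gross alpha activity and combined standard uncertainty (Exercise 3 sketch).
R_net, var_R = 3.3, 0.037          # net count rate (cpm) and its variance (cpm^2)
eff, u_eff = 0.17, 0.017           # counting efficiency (ASSUMED), ~10% relative
V, u_V = 0.100, 0.0005             # sample volume in L (ASSUMED), 0.5% relative
DPM_PER_PCI = 2.22                 # dpm per pCi (exact conversion)

A = R_net / (DPM_PER_PCI * eff * V)            # activity concentration, pCi/L

# Relative variances combine in quadrature for multiplication/division.
rel_var = var_R / R_net**2 + (u_eff / eff) ** 2 + (u_V / V) ** 2
u_A = A * math.sqrt(rel_var)

print(f"A = {A:.2f} +/- {u_A:.2f} pCi/L")
print(f"counting share of total variance: {var_R / R_net**2 / rel_var:.0%}")
```

With these assumed inputs the counting term contributes roughly a quarter of the total variance and the efficiency dominates, mirroring the observation on Slide 35.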

Slide 35. Test Yourself Exercise 3: Solution for the Standard Uncertainty
Using the general formula for the combined standard uncertainty previously shown, we will have three components: the count rate uncertainty, the efficiency uncertainty, and the sample volume measurement uncertainty. In order to calculate the uncertainty of the count rate, we need to calculate the count rate itself and its associated uncertainty. The net count rate is 3.3 cpm per the data provided. Since we don't need the count rate uncertainty all by itself, but rather its variance, we have shown here the value for the square of the count rate uncertainty as 0.037 cpm². Taking these values into the equation for the relative combined uncertainty, we then add in the relative uncertainties for the efficiency and the volume measurement. Note that in this instance the counting uncertainty accounts for only about 25% of the total uncertainty, and the efficiency is the largest contributor to the uncertainty.

Slide 36. Critical Level Concentration
If you recall the shape of the Gaussian distribution, the tails were asymptotic to the x-axis. This means that as you get farther away from the mean value, you are less and less likely to make a measurement belonging to that distribution of values. When we make radioactivity measurements and expect zero as the result, there is still some probability of getting high values that could be mistaken for true activity. We need to determine at what point above the mean concentration we say that we have true activity in the sample that is not zero. What is the confidence level that we want to achieve to do this? This level of confidence is also referred to as the tolerable error rate; typically a value of 5% is chosen. The result of this process sets our limit for a Type I error. A Type I error is one where we state that the sample has positive (i.e., non-zero) activity when in fact it has zero activity. The point that we set for the Type I error is referred to as the Critical Level.

Slide 37. The Critical Level Concentration
The graph on the right side of this slide shows a Gaussian distribution with the mean value at zero. The critical level value is set so that 95% of the time an analytical value will be considered part of the background. The critical value is equal to zero plus a constant times the standard uncertainty of the distribution. The constant for a 5% tolerable error rate is equal to 1.645. Note that this value is different from the 1.96 value used for a 95% confidence interval, because the Type I error rate uses only one side of the Gaussian distribution.

Slide 38. Critical Level: Type I Tolerable Error Rate
In order to calculate where the critical level is, we need to calculate the standard uncertainty for the analysis. Keep in mind that if we are using this to determine the critical level for a sample, two measurements are required: one for the sample and one for the instrument background. The determination of the sigma-zero value using a single blank count measurement can be best approximated by the term under the radical sign in the second equation. When the sample and blank count times are the same, this equation simplifies to the equation shown on the right. For a detailed description of this derivation see MARLAP Chapter 20, Section 20.2.
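The sigma-zero approximation just described can be sketched numerically. The background rate and count times below are assumed for illustration; the form of the radical follows the single-blank-count approximation discussed on Slide 38:

```python
import math

# Critical level from a single blank measurement (Slide 38 sketch).
# sigma0 is approximated by sqrt(Rb/ts + Rb/tb); with equal count times
# this reduces to sqrt(2*Rb/t).
Rb = 1.0            # background count rate, cpm (ASSUMED for illustration)
ts = tb = 100.0     # sample and blank count times, minutes (ASSUMED, equal)

sigma0 = math.sqrt(Rb / ts + Rb / tb)     # = sqrt(2*Rb/t) when ts == tb
Lc = 1.645 * sigma0                       # critical level, as a net count rate (cpm)

print(f"sigma0 = {sigma0:.4f} cpm, critical level = {Lc:.4f} cpm")
```

A measured net count rate above Lc would be declared detectable at the 5% Type I error rate; dividing Lc by efficiency, volume, and 2.22 dpm/pCi would convert it to a critical level concentration as on Slide 39.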

Slide 39. Critical Level Concentration Equation for Gross Alpha or Gross Beta
The factors to be used in the determination of the gross alpha or gross beta concentration are listed on the slide. The critical level concentration then becomes the value determined by the equation shown when an error rate of 5% has been selected for a Type I error. The critical level so determined is sample specific (a posteriori: calculated after the measurement).

Slide 40. Example of a Critical Level Concentration for Gross Beta Analysis
The data for an individual sample are shown in the blue area of this slide. The formula from the previous slide is applied to calculate the sample-specific critical level. Thus, using this single count for background and the sample-specific information, if the calculated value for activity exceeds 1.48 pCi/L the sample has detectable activity at a 5% error rate.

Slide 41. Minimum Detectable Concentration
The term minimum detectable concentration, or MDC, is a different concept from the critical level. The critical level deals only with a Type I error at a small acceptable error rate. The MDC is the smallest true concentration that has a high probability of producing a measured result above the critical level; thus the MDC concept includes both Type I and Type II errors. MDC calculations are usually a priori (that is, made before the measurement) and based on nominal values for the parameters used for counting the sample. An MDC value is theoretical and not sample specific; therefore the MDC should not be used for the purpose of deciding radionuclide detectability in a given sample.

Slide 42. Minimum Detectable Signal and Concentration
Another way of stating the MDC is shown on this slide. Note that the confidence level for the Type I decision is 95% and the Type II error rate is 5%. The MDC looks at the detectability issue in a different format than the critical level.

Slide 43.
Minimum Detectable Signal
The two curves on this slide show the distributions for the radionuclide-free sample (the blank) and for a sample that contains radioactivity at the minimum detectable concentration. Remember that the critical level concentration is based on the blank measurement and an assumed, small Type I error rate. The MDC is constructed so that, simultaneously, there is a small probability that a blank count will be mistaken for radioactivity (the curve on the left) and a large probability that a sample at the MDC will be declared to contain detectable radioactivity (the curve on the right). The two distributions cross at the critical level of the radionuclide-free sample distribution. The concentration which represents the MDC is the mean of the sample distribution; thus 50% of all measured results for samples that are truly at the MDC will actually fall below the MDC. This does not mean that there is no detectable activity!
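The relationship between the critical level and the minimum detectable signal sketched above can be illustrated numerically. This sketch uses the common simplification in which both tolerable error rates are 5% (multiplier 1.645) and the standard uncertainty at the MDC is approximated by sigma-zero; the background rate and count time are assumed values, and MARLAP Chapter 20 gives the exact treatment, which includes an additional small Poisson correction term:

```python
import math

# Minimum detectable signal sketch: critical level plus the Type II term,
# under the simplifying assumption u(true mean) ~= sigma0.
Rb = 1.0           # background count rate, cpm (ASSUMED for illustration)
t = 100.0          # count time, minutes (ASSUMED, sample time = blank time)

sigma0 = math.sqrt(2 * Rb / t)     # single-blank-count approximation
Sc = 1.645 * sigma0                # critical level (net rate), 5% Type I error
Sd = Sc + 1.645 * sigma0           # minimum detectable signal ~= 3.29 * sigma0

print(f"critical level = {Sc:.3f} cpm, minimum detectable signal = {Sd:.3f} cpm")
```

Note that the minimum detectable signal is twice the critical level under these assumptions, which is exactly why half of all results for a sample truly at the MDC fall below the MDC while still (usually) exceeding the critical level.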

Slide 44. Minimum Detectable Signal Equation
In order to adequately address how to determine an MDC, we need to first look at the previous two graphs and assess how to get to the MDC value on the curve to the right. The signal response is the x-axis. If we use the critical level signal as the starting point, we need only add the Type II error multiplier times the standard uncertainty about the true mean to get the signal response that corresponds to the MDC.

Slide 45. Simplifying the Minimum Detectable Signal Equation
When we make the assumptions used for the MDC calculation and apply them to the minimum detectable signal, the multipliers for the 5% Type I and 5% Type II tolerable error rates are the same, 1.645. The equation from the previous slide can then be simplified to the equation shown here. Keep in mind that the value of sigma zero for a single background measurement is the same as shown on Slide 38. For the exact development of these equations refer to Chapter 20 of MARLAP.

Slide 46. Example of an MDC Calculation
The data shown on this slide are the same as those we used for the SDWA calculation of the required detection limit. The definitions of these detection limits are very different, so we should expect different results. The final value of 4.06 pCi/L obtained using the MDC calculation is greater than the value of 2.4 pCi/L obtained using the SDWA calculation. When different equations for detection capability are used, different results will be obtained. In the case of the SDWA, EPA defines how detectability is determined; that is a legal reporting requirement, and no other method should be used.

Slide 47. Conclusion
We have introduced a great number of statistical concepts in this module that relate to uncertainty and measurements. This is an introductory module; the student should review other texts and literature, such as MARLAP, to get a more thorough understanding of these topics as well as more advanced concepts.
Review the objectives listed here and review this module to ensure that you have a basic understanding of each of these objectives.
