
Some practical examples of method validation in the analytical laboratory


Piet van Zoonen*, Ronald Hoogerbrugge, Steven M. Gort, Henk J. van de Wiel, Henk A. van 't Klooster
National Institute of Public Health and the Environment (RIVM), P.O. Box 1, 3720 BA Bilthoven, The Netherlands

*Corresponding author.

Method validation is a key element in both the elaboration of reference methods and the assessment of a laboratory's competence in producing reliable analytical data. Hence, the scope of the term method validation is wide, especially if one bears in mind that there is, or at least should be, a close relation between validation, calibration and quality control (QA/QC). Moreover, validation should include more than the instrumental step only, since the whole cycle from sampling to the final analytical result is important in the assessment of the validity of an analytical result. In this article, validation is put in the context of the process of producing chemical information. Two cases are presented in more detail: the development of a European standard for chlorophenols and its validation by a full-scale collaborative trial, and the intralaboratory validation of a method for ethylenethiourea using alternative analytical techniques. © 1999 Elsevier Science B.V. All rights reserved.
Keywords: Method validation; Quality control; Collaborative trial; Calibration

1. Introduction
Quality is a relative notion: never 'high' nor 'low' in an absolute sense, but rather adequate or inadequate in terms of the extent to which a product, process or service meets the requirements specified beforehand by an objective or a customer.


The principal product of an analytical chemical laboratory is information about the chemical composition of material systems, usually in terms of the identity and/or quantity of one or more relevant components in samples taken from these materials. The quality of scientific information in general is evaluated by internationally accepted standards of objectivity, integrity, reproducibility and traceability, in any case prior to publication. Essential criteria for the quality of produced chemical information are its utility and reliability, which are closely related to the margins of uncertainty in the measurement results, regarding both the identity and the concentration of the target components. With respect to these correlated criteria, minimum requirements are generally set by the customer and usually deduced from a previously specified purpose. The quality of produced chemical information is therefore ultimately judged by the customer as the end user of this information. For chemical measurements, this could be a clinical chemist who needs to know the identity of certain compounds isolated from a biological fluid, or a polymer chemist who wishes to verify the molecular structure of a synthesis product, or a health researcher who wants to know whether the concentration of a certain toxic compound in a certain food is above a certain concentration level.

It is not hard to imagine the consequences, in terms of costs, health risks, etc., when, on closer examination or statistical evaluation of the measurement results, a 'positive' turns out to be false, or the uncertainty margin of a measured concentration appears to be 100% and not the 10% initially reported. Evaluation and validation of analytical methods and laboratory procedures are therefore of paramount importance, prominent means being the use of adequate (preferably certified) reference materials and participation in interlaboratory proficiency tests. Quality demands made upon the infrastructure, equipment, operating procedures, personnel and organisation of the laboratory are to be deduced from the quality requirements that the produced chemical information should meet. A formal recognition of this type of quality can be achieved through accreditation or certification, based on international quality standards and guidelines, as issued by ISO, OECD and CEN. Validation of analytical methods is but one, though essential, step in the integral process of quality assurance and quality control of chemical measurements in material systems [1,2].

Fig. 1. Chemical analysis as a cyclic process.

2. Chemical analysis as an integral process


Chemical analysis of whatever material system can be described as a chain of decisions, actions and procedures [3]. The cyclic nature of many chemical analysis processes is schematically depicted in Fig. 1. The last step (interpretation and evaluation of the results of the analysis) should eventually provide an answer to the starting problem, generally posed by, e.g., a scientific question or a legal context. If the answer is not satisfactory, the analysis cycle can be followed again, after a change or adaptation of one or more steps. Sometimes this leads to the development of a new method or (part of a) procedure, for example in order to achieve a better separation of certain components, or to attain a lower detection limit for specific compounds. Like any chain, a chemical analysis chain is only as strong as its weakest link. In general, the weakest links in an analytical process are not the ones usually recognised as parts of a chemical analysis, such as chromatographic separation or spectrometric detection, but rather the preceding steps, often taking place outside the analytical laboratory: the selection of the object(s) to be sampled, the design of the sampling plan, subsampling, sampling techniques, and the transportation and storage of samples.


When the analytical laboratory is not responsible for the sampling, the quality management system often does not even account for these weak links in the analytical process! Further, if elements such as preparation of the laboratory sample, extraction or clean-up of the samples have not been carried out carefully, even the most advanced and quality-controlled analytical instruments and sophisticated computer techniques cannot prevent the analytical results from being questionable. Finally, when the interpretation and evaluation of the analytical results have no statistical basis, it is not clear how significant they are, which greatly determines their utility. Therefore, in our opinion, quality control and quality assurance should involve all the steps of the chemical analysis as an integral process, of which validation of the analytical methods used is only one, though important, step. In laboratory practice, quality criteria should be concerned with the rationale of the sampling plan, the validation of methods, instruments and laboratory procedures, the reliability of identifications, the accuracy and precision of measured concentrations, and the comparability of laboratory results with relevant information produced earlier or elsewhere.

3. Quality, information and uncertainty

According to Shannon [4], gaining information is reducing the amount of uncertainty, as provided by the results of measurements. The following uncertainty sources in quantitative analysis have been identified by EURACHEM [5]:

- Incomplete definition of the measurand
- Sampling
- Incomplete extraction/preconcentration and interferences
- Matrix effects
- Contamination during sampling or sample preparation
- Personal bias in reading analogue instruments
- Lack of awareness/imperfect measurement of the effects of environmental conditions on the measurement procedure
- Uncertainty of weights and volumetric equipment
- Instrument resolution or discrimination threshold
- Values assigned to measurement standards and reference materials
- Values of constants and other parameters obtained from external sources, used in the data reduction algorithm
- Approximations and assumptions incorporated in the measurement procedure
- Random variation.

However, one must realise that for a particular problem one should try to assess the relative influence of the alternative sources on the final outcome: e.g. if the dominating source of uncertainty of the procedure has an RSD of 25%, it is not very efficient to demand an accuracy of 0.1% in measuring or weighing the initial sample volume. As stipulated by Horwitz and Albert [6], the among-laboratories variability is the dominating error component in the world of practical ultratrace analysis. They conclude that a single laboratory cannot determine its own error structure, except in the context of certified reference materials or consensus results from other laboratories. Again, the importance of the use of reference materials is underlined, since these provide information on the combined effect of many of the potential sources of uncertainty. In the literature as well as in laboratory practice, quantification of uncertainty in qualitative analysis (identification, molecular structure elucidation) is scarcely addressed. Yet there are possibilities, for example when using computer-aided library searching of molecular spectra for the identification of organic compounds. If an adequate similarity index is used, such as Cleij's reproducibility-based system for different kinds of molecular spectra [7], it is possible to specify a quantitative threshold of uncertainty, just as confidence intervals are used in quantitative analysis. In the next version of the EURACHEM syllabus, quantification of uncertainty will also refer to qualitative analysis.
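To make the first point concrete, independent relative standard uncertainties combine (to a first approximation) in quadrature, so a 0.1% contribution from weighing or volume measurement is invisible next to a dominating 25% contribution. A minimal sketch, using the illustrative figures from the text:

```python
import math

# Illustrative relative standard uncertainties (as fractions), taken from the text:
u_dominant = 0.25    # dominating source of the procedure, RSD 25%
u_weighing = 0.001   # measuring/weighing the initial sample volume, 0.1%

# Independent relative uncertainties combine as the root sum of squares.
u_combined = math.sqrt(u_dominant**2 + u_weighing**2)

print(f"combined RSD      : {u_combined:.4%}")               # ~25.0002%
print(f"weighing's effect : {u_combined - u_dominant:.2e}")  # ~2e-06, i.e. negligible
```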

4. Validation characteristics / performance criteria

The key criteria for the evaluation of an analytical method are the following [8-10]:

- selectivity/specificity
- limit of detection
- limit of quantification
- recovery
- working range and linearity
- accuracy/trueness
- precision (repeatability, reproducibility)
- ruggedness/robustness.


Whether or not these criteria apply to the validation of a particular method depends on the purpose and the nature of the method. In its guidelines, the Dutch Board for Accreditation distinguishes between 'new', 'standard' and 'modified' methods in order to characterise the nature of a method; a standard method in this definition is a fully validated (inter)national standard. Methods are, on the other hand, classified according to their purpose, e.g. qualitative vs. quantitative, or trace-analytical vs. major components. The validation requirements for such methods are summarised in Fig. 2 [11].

5. Current approaches to validation


In a recent review [12], current approaches to the validation of analytical chemical methods are discussed.

Fig. 2. Validation requirements as demanded by the Dutch Board of Accreditation.

Some shortcomings of existing validation schemes are identified, such as insufficient coverage of variability in space or time, and mismatches between validation criteria and the intended use of the method, for example use in regulatory control. An attempt is made to link validation concepts used in different fields, such as measurement uncertainty and prediction error. A general statistical modelling approach for combining different aspects of validation is recommended and illustrated with an example. This type of modelling should be the basis for the development of new, statistically underpinned validation schemes which integrate current validation and quality assurance activities. It is stated that validation includes the initial assessment of performance characteristics, several types of interlaboratory testing, and quality control. Validation is thus concerned with assuring that a measurement process produces valid measurements; this has also been called measurement assurance [13].

The concept of validation of a method of analysis may be used in (at least) three senses. In the narrow and traditional sense, the term denotes validation of a chemical method as described in a standard operating procedure (SOP). In a wider sense, validation may be concerned with a method of analysis (e.g. in an (inter)national standard) which explicitly leaves freedom to adapt the procedure to the infrastructure in a specific situation; in this case there may be several SOPs, all in conformity with the 'master method'. Finally, in a still wider and perhaps unconventional sense, validation of analytical methods may be considered from the perspective of those who use analytical results for other purposes. The method of analysis for end users of analytical results amounts to the specification of an analytical result (e.g. 'clenbuterol in liver'), with the implied 'analysed by any reasonable method'. Accordingly, a specific method of analysis in the analytical chemical sense can be considered as just one realisation of the class of all methods currently applied to measure component X in matrix Y. In principle, each modification of the protocol invalidates an existing validation according to ISO 5725.

Much work on validation has been performed in joint efforts of IUPAC, ISO and AOAC International (see [14] for its early history). Results appear as a series of harmonised protocols. The second edition of standard ISO 5725 [15] has much in common with the IUPAC/ISO/AOAC protocol for the design, conduct and interpretation of collaborative studies [16]. Important contributions to some of the problems mentioned above were made in two other protocols [17,18], on proficiency testing (PT) and on internal quality control (IQC), respectively.


The international harmonised protocol for the proficiency testing of (chemical) analytical laboratories [17] considers laboratory-performance studies, in which each laboratory uses its own analytical method, as opposed to the method-performance studies of ISO 5725 (nomenclature according to [19]). Although the purpose of proficiency testing is often primarily the evaluation or improvement of laboratory performance, it is reasonable to consider it also as a method-performance validation with our wider definition of method (e.g. from the perspective of a customer interested in 'clenbuterol in liver' measurements). This would solve the problems of different laboratories having different SOPs, and of SOPs changing every now and then in each laboratory. There is a prescribed repetition in proficiency testing schemes, with frequencies of once every 2 weeks to once every 4 months being considered reasonable. This solves the problem of the static nature of ISO 5725 validation and, by varying the test materials, also the problem of not assessing matrix variability. Despite all these advantages, proficiency testing according to the IUPAC guidelines [17] cannot be considered a complete validation methodology on its own. First of all, it does not provide for SOP-specific validation. More importantly, the scheme requires repeated interlaboratory studies, which severely restricts the amount and variety of samples that can be analysed; proficiency testing is therefore an extensive validation methodology. Finally, in the current protocol the only performance characteristic considered is laboratory bias, most often in the form of a z-score. This information alone may be insufficient to judge fitness for purpose.

It has been shown that effective measurement assurance requires validation at different scales. Newly developed or implemented methods are usually first validated in-house. This type of validation should be supplemented by ongoing internal quality control in each laboratory, and by participation of the laboratory in interlaboratory schemes. Considering the complex nature of many modern methods of analysis, proficiency testing schemes, allowing laboratory-specific SOPs, are more to the point than method-evaluating schemes like ISO 5725. Currently, there is insufficient linkage between the three validation schemes: in-house validation, internal quality control and proficiency testing. The model presented in that review [12], together with the concepts of measurement uncertainty and fitness-for-purpose, provides a basis for the development of integrated validation approaches.

During a recent Joint FAO/IAEA Expert Consultation on validation of analytical methods for food control, the 'ideal validated method' was defined as follows [9]: "The ideal validated method is one that has progressed fully through a collaborative study in accordance with internationally harmonised protocols for the design, conduct and interpretation of method performance studies. This usually requires a study design involving a minimum of 5 test materials, the participation of 8 laboratories reporting valid data, and most often includes blind replicates or split levels to assess within-laboratory repeatability parameters." Limiting factors for completing ideal multi-laboratory validation studies include high costs, a lack of sufficient expert laboratories available and willing to participate in such studies, and overall time constraints.
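For reference, the laboratory bias reported in proficiency testing is most often a z-score: the deviation of the laboratory result from the assigned value, scaled by a target standard deviation chosen for fitness for purpose. A minimal sketch; the assigned value and target SD used here are hypothetical illustrations:

```python
def z_score(result, assigned_value, target_sd):
    """z-score as used in proficiency testing: (result - assigned value) / target SD."""
    return (result - assigned_value) / target_sd

# Hypothetical round: assigned value 10.0 ug/l, fitness-for-purpose target SD 1.5 ug/l.
z = z_score(12.4, assigned_value=10.0, target_sd=1.5)
print(z)            # 1.6
print(abs(z) <= 2)  # |z| <= 2 is conventionally regarded as satisfactory
```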

6. Validation by using alternative analytical methods


The validation strategies described above all assume that methods are applied on a routine basis in various laboratories. In a research environment, a rather unique method might be developed and validated for use in only one or a few studies. One would, of course, still like to establish the same amount of information on the validity of the method. However, some of the usual tools, such as participation in proficiency schemes and the use of reference materials, are probably not available. A number of practical examples, i.e. the screening and analysis of polar pesticides in environmental monitoring programmes by coupled-column liquid chromatography and gas chromatography-mass spectrometry, are discussed by Hogendoorn et al. [20]. One of the examples is a study on the levels of ethylenethiourea (ETU) in groundwater. Below, the main aspects of the validation are described; further details can be found in [20].

In the validation of both methods the calibration procedure is very important and provides the information for several of the criteria. The calibration is based on spiked samples comparable to the groundwater samples to be analysed, which in itself, by comparison with the analysis of standards, yields the recovery data. The calibration samples are ordered in time around and between the real samples; therefore, the possible influence of the conditions during the analysis of the samples (robustness) is automatically captured in the calibration sequence. To determine the working range of both methods, the calibration data are evaluated with the spreadsheet application Calwer [21]. The result is shown in Fig. 3.


Fig. 3. Calibration curve for ETU analysed by HPLC-UV. Several calibration models are statistically tested and a model for the variance is applied.

The application enables an extensive evaluation of the calibration curve, for example with respect to the appropriate calibration model. The results are presented to the user both numerically, as results of statistical tests, and graphically, to enable a graphical interpretation. These results show, for the ETU/HPLC-UV calibration, that the simplest calibration model (y = bx) is to be preferred, since the more complex models (y = a + bx or y = a + bx + cx²) show neither significant nor relevant improvements of the residual standard deviation. Calwer also offers the opportunity to test the concentration dependence of the residuals. In most calibration software, ordinary least squares regression is used to calculate the calibration curve. This approach is based on the assumption of homoscedasticity of the measurements: the deviations from a reasonable calibration model at low concentrations should then be equal to the deviations at the high limit of the working range. For ETU this assumption is clearly violated; therefore a suitable model for the residuals is used in combination with weighted least squares regression. In addition to the graphical presentation, the log likelihood is calculated, which provides an objective criterion for selecting between several variance models. Weighted regression has been applied in our laboratory for about 5 years, showing that the assumption of equal variance over the working range is nearly always severely violated. Application of a reasonable variance model implicitly gives the standard deviation at low concentrations and therefore estimates the limit of detection. The strategy of variance models and weighted regression in combination with the maximum likelihood criterion is formalised for method comparison calibration/validation in ISO 13752 [22]. The selectivity of the method is checked by comparing the shape and the retention of real samples and calibration samples with the chromatogram of standard solutions.
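A minimal sketch of the kind of comparison described above: a calibration line forced through zero, fitted once assuming constant variance (ordinary least squares) and once with a variance model in which the standard deviation is proportional to concentration, the two variance models being compared via the Gaussian log-likelihood. The data and this particular variance model are illustrative assumptions, not the Calwer implementation.

```python
import numpy as np

# Hypothetical ETU-like calibration data: concentration (ug/l) vs. instrument response.
x = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
y = np.array([0.012, 0.019, 0.051, 0.098, 0.205, 0.490, 1.020])

def wls_slope(x, y, w):
    """Weighted least squares slope for the model y = b*x (weights w = 1/variance)."""
    return np.sum(w * x * y) / np.sum(w * x * x)

def gauss_loglik(residuals, sd):
    """Gaussian log-likelihood of the residuals given per-point standard deviations."""
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2) - 0.5 * (residuals / sd) ** 2)

# Variance model 1: constant standard deviation (homoscedastic, ordinary least squares).
b1 = wls_slope(x, y, np.ones_like(x))
sd1 = np.full_like(x, np.std(y - b1 * x, ddof=1))
ll1 = gauss_loglik(y - b1 * x, sd1)

# Variance model 2: standard deviation proportional to concentration (heteroscedastic).
b2 = wls_slope(x, y, 1.0 / x**2)            # weights 1/var, with var proportional to x^2
sd2 = np.std((y - b2 * x) / x, ddof=1) * x  # relative residual spread scaled back to x
ll2 = gauss_loglik(y - b2 * x, sd2)

print(f"constant variance     : b = {b1:.4f}  log-likelihood = {ll1:.1f}")
print(f"proportional variance : b = {b2:.4f}  log-likelihood = {ll2:.1f}")
```

With heteroscedastic data the proportional-variance fit typically gives the higher log-likelihood, and its variance model directly yields the standard deviation, and hence a detection limit estimate, at low concentrations.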


Fig. 4. Comparison of levels of ETU found by the two analytical methods. At the very low end of the concentration range, deviations between the results of both methods seriously exceed the expectation based on the two validations.

For the analysis of ETU in groundwater, neither reference materials nor proficiency testing schemes are available. A reasonable accuracy is assured by a number of precautions:

- standard solutions are made by two independent routes
- the solid reference material is checked by IR spectroscopy for impurities
- a comparison study is performed with one other laboratory (familiar with this analysis)
- all 60 groundwater samples are analysed by two independent methods, HPLC-UV and GC-MS.
The results are shown in Fig. 4. The analysis of all samples by the two methods offers the opportunity to compare the variance between real samples with the expected variance based on the validation results of both methods with the same type of regression. For intermediate and relatively high concentrations of ETU in groundwater (> 1 µg/l), the variance between both methods complies with its expectation. However, for very low concentrations (< 1 µg/l), the variance between the results of both methods is much larger than expected. This deviation might be an indication that the natural spread in interfering materials in the groundwater is not completely covered by the samples used for the calibration/validation experiments. These interferences apparently only have a substantial effect on the measurement uncertainty at low levels (< 1 µg/l) of the analyte. This additional source of uncertainty might disturb the assumption that the method with the best validation characteristics should be preferred as the reference method (the x in the regression equation y = a + bx). Without precautions, the additional source of variance is treated by the usual regression schemes as belonging to the y-method, yielding estimates for the slope of the calibration relation (b) which are too low. In method comparison this implies that the ratio of the y-method results with respect to the x-method results is underestimated. An alternative approach to comparing analytical methods exploits the 'known' variances of both methods [23,24]. However, these results will also be disturbed by the additional matrix variance, because it will be shared according to the ratio of these 'known' variances. Recently, Carroll and Ruppert [25] described the statistical implications of additional sources of variance in regression analysis, suggesting a rather simple correction (method of moments) to obtain a reasonable estimate of the ratio between the two methods.
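A minimal sketch of such a method-of-moments correction, in its simplest classical form: when the error variance of the x-method is known (e.g. from its validation), the ordinary least squares slope can be divided by the estimated reliability ratio to undo the attenuation. This illustrates the general idea under simplifying assumptions and is not the exact procedure of Carroll and Ruppert [25]; the data are simulated.

```python
import numpy as np

def moments_corrected_slope(x_obs, y_obs, var_x_error):
    """OLS slope of y on x, corrected for attenuation caused by measurement error in x.
    var_x_error is the (assumed known) error variance of the x-method results."""
    x_obs, y_obs = np.asarray(x_obs, float), np.asarray(y_obs, float)
    s_xx = np.var(x_obs, ddof=1)
    s_xy = np.cov(x_obs, y_obs, ddof=1)[0, 1]
    b_ols = s_xy / s_xx                         # attenuated ("too low") slope
    reliability = (s_xx - var_x_error) / s_xx   # est. var(true x) / var(observed x)
    return b_ols / reliability

# Simulated method comparison: 60 samples, both methods with error SD 0.3, true ratio 1.0.
rng = np.random.default_rng(1)
true_conc = rng.uniform(0.2, 5.0, 60)
x = true_conc + rng.normal(0.0, 0.3, 60)        # reference (x) method
y = 1.0 * true_conc + rng.normal(0.0, 0.3, 60)  # second (y) method

print(np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1))    # naive OLS slope, biased low
print(moments_corrected_slope(x, y, var_x_error=0.3**2))  # close to the true ratio of 1.0
```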


Fig. 5. Relative standard deviations of reproducibility as a function of the concentration for all sample/component combinations. For comparison, the Horwitz curve is also plotted.
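For reference, the Horwitz curve plotted in Fig. 5 (and used as a benchmark in Section 7) follows the well-known empirical relation RSD_R(%) = 2^(1 - 0.5 log10 C), with C the concentration expressed as a dimensionless mass fraction. A minimal sketch:

```python
import numpy as np

def horwitz_rsd_percent(mass_fraction):
    """Predicted among-laboratory RSD (%) from the Horwitz relation,
    RSD_R = 2 ** (1 - 0.5 * log10(C)), with C a dimensionless mass fraction."""
    return 2.0 ** (1.0 - 0.5 * np.log10(mass_fraction))

# Example: 1 ug/l of a chlorophenol in water corresponds to a mass fraction of about 1e-9.
print(horwitz_rsd_percent(1e-9))   # roughly 45% expected reproducibility RSD
```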

6.1. Validation sheet

In our laboratory, analytical procedures are always validated. Key elements of the method, references to relevant documentation (study plan, SOPs, files of raw data, reference materials, (interlaboratory) confirmations) and an abstract of the validation results (see above) are summarised in a validation sheet, which in turn gives access to the original raw data.

7. Interlaboratory method validation

Another approach to validation is by interlaboratory tests. As an example, the validation of a GC determination of chlorophenols in water is summarised below; details can be found in [26,27]. In order to validate the (preliminary) CEN method PrEN 12673 'Water quality - Gas chromatographic determination of some selected chlorophenols in water' [28], an interlaboratory study was organised. The intercomparison study comprised a total of 9 samples: a high-level, a low-level and a blank sample of three types of water (drinking water, surface water and waste water). The levels of the spikes were based on the water quality objectives. The variation in the samples due to homogeneity and stability of the components was tested extensively and appeared to be negligible in comparison with the variation between the results of the participants.

Results were received from 24 laboratories from 8 European countries. In the total data set, about 7% of the data were detected as statistical outliers and were subsequently rejected. The number of eliminated outliers is comparable with "experience in the United States with environmental analytes indicating that about 9% of the data represents out-of-control performance" [29]. For the remaining data set, the relative standard deviations for repeatability varied between 5% and 25%, and those for reproducibility between 26% and 56%. Both ranges complied with the general variation in interlaboratory studies as found by Horwitz [30]. The comparison of the reproducibility with the Horwitz relation is shown in Fig. 5. The recovery of the spikes found by the participants generally varied between 60 and 140% (Fig. 6). The data set was also evaluated for differences in results due to the degrees of freedom in the standard method. The largest difference was found between participants performing internal versus external calibration: the participants using external calibration obtained results that were on average up to 50% lower than those applying internal standards. Supported by this result, the use of an internal standard will be mandatory in EN 12673. Another degree of freedom in the standard method that had to be evaluated was the use of a mass spectrometer (MS) instead of an electron capture detector (ECD). The MS results varied, among the components, between minus 25% and plus 20% compared to the ECD results.

The results of the interlaboratory study were compiled into one large data set to enable a multivariate evaluation. Since only alternative selections of the chlorophenols were added to each of the samples, this data set contained many missing values (> 50%). The usual calculation techniques for PCA cannot deal with such a data set; therefore, a least squares procedure was implemented to estimate the PCA-like results iteratively. The scores were calculated as the projection of the part of the data that was actually measured. The result showed a prominent first principal component that describes about 40% of the total variance, with all loadings on the same side. This indicated that a major multicomponent systematic source of variation was present at a number of laboratories. This implies that the elimination of that source of variance, which might be relatively simple, will have a major influence on the comparability of EN 12673 measurements. Both the study of the systematic differences (like the differences between GC-ECD and GC-MS results) and the result of the PCA show that the application of standard multivariate statistical tools easily obtains essential information from interlaboratory data sets which might otherwise be overlooked.

Fig. 6. Recovery of the additions, calculated as the ratio between the indicator value and the general mean, as a function of the concentration of the indicator value.
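The authors' iterative least squares procedure is not reproduced here; as an illustration only, the sketch below shows one common way to obtain PCA-like scores from a data set with many missing values, by repeatedly re-imputing the missing cells from a low-rank SVD model. It is a generic alternative under simplifying assumptions, not the implementation used for the EN 12673 data set.

```python
import numpy as np

def pca_with_missing(X, n_comp=1, n_iter=500, tol=1e-10):
    """EM-style PCA for a matrix with NaNs: impute missing cells from a rank-n_comp
    SVD model, refit, and iterate until the fit at the observed cells stabilises."""
    X = np.asarray(X, float)
    observed = ~np.isnan(X)
    Xf = np.where(observed, X, np.nanmean(X, axis=0))   # start from column means
    previous_fit = np.inf
    for _ in range(n_iter):
        mean = Xf.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xf - mean, full_matrices=False)
        model = (U[:, :n_comp] * s[:n_comp]) @ Vt[:n_comp] + mean
        fit = np.sum((X[observed] - model[observed]) ** 2)
        Xf = np.where(observed, X, model)                # re-impute missing cells only
        if abs(previous_fit - fit) < tol:
            break
        previous_fit = fit
    scores = (Xf - Xf.mean(axis=0)) @ Vt[:n_comp].T
    explained = s[:n_comp] ** 2 / np.sum(s ** 2)
    return scores, Vt[:n_comp], explained

# Example with a small, partly missing (hypothetical) laboratory x component table:
X = np.array([[1.0, 2.1, np.nan], [0.9, np.nan, 3.2], [1.2, 2.0, 2.9], [np.nan, 2.3, 3.1]])
scores, loadings, explained = pca_with_missing(X, n_comp=1)
```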

8. Other useful elements for an in-house validation procedure

8.1. Reference materials

If relevant reference materials are available, they are a powerful tool in the assessment and control of the accuracy of the applied analytical method. In practical application in the laboratory one should, however, realise that additional control experiments are often necessary, since reference materials usually do not reflect the complete range of application of the method with respect to concentration, matrix effects and possible inhomogeneity of samples.

8.2. Proficiency testing schemes

The data from proficiency testing schemes are helpful in assessing the performance of a method in the laboratory. In most instances this has the same practical limitations as the use of reference materials and, additionally, the usefulness of the results strongly depends on the quality of the data produced by the other laboratories in the ring test.

8.3. Control charts

Validation studies often demonstrate the performance of an analytical method before its routine application. Whether the assessed performance remains valid for the routine measurements can be controlled by the repeated analysis of control samples. The results are monitored in a control chart with warning and action limits. Application of a stable control sample also provides the information necessary for the interpretation of long-term trend studies.
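A minimal sketch of such a control chart calculation, deriving warning (mean ± 2s) and action (mean ± 3s) limits from a series of control-sample results; the data shown are hypothetical:

```python
import numpy as np

def control_limits(control_results):
    """Warning (±2s) and action (±3s) limits around the mean of control-sample results."""
    m = float(np.mean(control_results))
    s = float(np.std(control_results, ddof=1))
    return {"mean": m, "warning": (m - 2 * s, m + 2 * s), "action": (m - 3 * s, m + 3 * s)}

# Hypothetical control-sample series (ug/l) analysed alongside routine batches.
history = [9.8, 10.1, 10.4, 9.7, 10.0, 10.2, 9.9, 10.3]
limits = control_limits(history)

new_result = 10.9
low, high = limits["warning"]
print(limits)
print("outside warning limits:", not (low <= new_result <= high))
```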
8.4. Double-blind replicates

With respect to proficiency control, reference materials and control samples have limited value, due to their high price (only for reference materials) and because the operators might recognise them. The possible measurement errors at known concentrations are of course not fully representative of the possible measurement errors at unknown concentrations. To avoid this limitation, the laboratory can ask the sampling organisation to provide a number of unknown replicates (with independent sample codes). After finalising the measurements, the codes of the replicates are revealed, and these data are quite representative of the precision in the other samples.

9. Conclusion

Important trends in method validation are the development of alternatives to the conventional collaborative trial, which has its limitations due to the large amount of work needed to establish a standard method. The proliferation of laboratory accreditation has prompted the need for practical in-house validation procedures, which in turn may prove their value in method evaluation. The more intensive use of QA/QC schemes in recent years is also a valuable source of performance data.

References
[1] J.M. Christensen, J. Kristiansen, A.M. Hansen, J.L. Nielsen, Method validation: an essential tool in total quality management, in: M. Parkany (Editor), Quality Assurance and Total Quality Management for Analytical Laboratories, The Royal Society of Chemistry, Cambridge, 1995, p. 46.
[2] I.R. Juniper, Method validation: an essential element in quality assurance, in: M. Parkany (Editor), Quality Assurance and Total Quality Management for Analytical Laboratories, The Royal Society of Chemistry, Cambridge, 1995, p. 114.
[3] H.A. van 't Klooster, Chemical analysis as a cyclic process (A chain is only as strong as its weakest link), in: G. Angeletti, A. Bjørseth (Editors), Organic Micropollutants in the Aquatic Environment, Kluwer Academic, Dordrecht, 1991.
[4] C.E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J. 27 (1948).
[5] A. Williams, W. Wegscheider (Editors), Quantifying Uncertainty in Analytical Measurement, EURACHEM Report, LGC, Teddington, 1995.
[6] W. Horwitz, R. Albert, Reliability of the determinations of polychlorinated contaminants (biphenyls, dioxins, furans), J. AOAC Int. 79 (1996) 589-621.
[7] H.A. van 't Klooster, P. Cleij, H.J. Luinge, G.J. Kleywegt, Computer-aided spectroscopic structure analysis of organic molecules using library search and artificial intelligence, in: W.A. Warr (Editor), Chemical Structures: The International Language of Chemistry, Springer Verlag, Berlin, 1988, p. 219.
[8] H. Holcombe (Editor), The Fitness for Purpose of Analytical Methods. A Laboratory Guide to Method Validation and Related Topics, EURACHEM Guide, LGC, Teddington, 1998.
[9] FAO Food and Nutrition Paper 68, Validation of Analytical Methods for Food Control, Report of a Joint FAO/IAEA Expert Consultation, FAO/UN, Rome, 1998.
[10] A.R.C. Hill, S.L. Reynolds, Guidelines for validation of analytical methods for monitoring trace organic components in foodstuffs and similar materials, Analyst (submitted).
[11] RVA-SC08, Aanvullende criteria voor methode-validatie, Raad voor Accreditatie (Dutch Board for Accreditation), 1996 (in Dutch).
[12] H. van der Voet, J.A. van Rhijn, H.J. van de Wiel, Interlaboratory and time aspects of effective validation (submitted).
[13] M.B. Carey, Measurement assurance: role of statistics and support from international statistical standards, Int. Stat. Rev. 61 (1993) 27.
[14] W. Horwitz, History of the IUPAC/ISO/AOAC harmonization program, J. AOAC Int. 75 (1992) 368.
[15] ISO 5725, Accuracy (Trueness and Precision) of Measurement Methods and Results, Parts 1-4, 6, International Organization for Standardization, Geneva, 1994.

[16] IUPAC, Protocol for the design, conduct and interpretation of collaborative studies, Pure Appl. Chem. 60 (1988) 855.
[17] IUPAC, The international harmonized protocol for the proficiency testing of (chemical) analytical laboratories, Pure Appl. Chem. 65 (1993) 2123.
[18] IUPAC, Harmonized guidelines for internal quality control in analytical chemistry laboratories, Pure Appl. Chem. 67 (1995) 649.
[19] IUPAC, Nomenclature of interlaboratory analytical studies, Pure Appl. Chem. 66 (1994) 1903.
[20] E.A. Hogendoorn, R. Hoogerbrugge, R.A. Baumann, H.D. Meiring, A.P.J.M. de Jong, P. van Zoonen, Screening and analysis of polar pesticides in environmental monitoring programmes by coupled-column liquid chromatography and gas chromatography-mass spectrometry, J. Chromatogr. A 754 (1996) 49.
[21] S.M. Gort, R. Hoogerbrugge, A user-friendly spreadsheet program for calibration using weighted regression, Chemom. Intell. Lab. Syst. 28 (1995) 193.
[22] ISO 13752, Air Quality - Assessment of Uncertainty of a Measurement Method under Field Conditions using a Second Method as Reference, International Organization for Standardization, Geneva, 1998.
[23] A. Madansky, The fitting of straight lines when both variables are subject to error, J. Am. Stat. Assoc. 54 (1959) 173.
[24] J. Riu, F.X. Rius, Assessing the accuracy of analytical methods using linear regression with errors in both axes, Anal. Chem. 68 (1996) 1851.
[25] R.J. Carroll, D. Ruppert, The use and misuse of orthogonal regression in linear errors-in-variables models, Am. Stat. 50 (1996) 1.
[26] R. Hoogerbrugge, M.R. Ramlal, G.H. Stil, S.M. Gort, H.A.G. Heusinkveld, E.G. van der Velde, P. van Zoonen, Interlaboratory Validation of PrEN 12673: Water Quality - Gas Chromatographic Determination of Some Selected Chlorophenols in Water, RIVM Report 219101006, 1997.
[27] R. Hoogerbrugge, S.M. Gort, E.G. van der Velde, P. van Zoonen, Multi- and univariate interpretation of the interlaboratory validation of PrEN 12673; GC determination of chlorophenols in water, Anal. Chim. Acta 338 (1999) 119.
[28] EN 12673, GC Determination of Chlorophenols in Water, 1998.
[29] W. Horwitz, personal communication.
[30] W. Horwitz, Evaluation of analytical methods used for regulation of foods and drugs, Anal. Chem. 54 (1982) 67A.

