
www.CarpetDiemBlog.com
by Deborah Lema

I love carpet. I love science. …I love NASA. But when I see that the
gun at PTL isn’t the one in the press releases, and I see NASA say
in press releases that this XRF testing measures the water in
carpets (it doesn’t), and I see a VP of marketing from a major
carpet mill saying in yet another release that this XRF testing sees
cleaning residue too (it doesn’t), it is obvious that even the people
most involved with the CRI SOA program are confused or
misinformed or something.

Below is a casual summary of topics and questions pulled from the discussion paper regarding the XRF method used for the SOA program (paper link at right). I put this summary together for those who were requesting a quick reference synopsis and “translation” of the paper, and also to reiterate that these questions still stand.

Casual Summary of Points

PTL – Professional Testing Laboratory

CRI – Carpet and Rug Institute

XRF – X-ray fluorescence


1. This kind of XRF prefers a flat, even surface to look at. Without
a flat, even surface, errors are not predictable and can’t be
corrected for. Carpet is not a flat, even surface.

2. This kind of XRF prefers to be right up on something it’s looking at, since air absorbs the energy needed for seeing things. The XRF in PTL’s set-up is up over the carpet and is not right up on it. Plus, obviously, there are airy spaces between the fibers and yarns.

3. XRF needs to be properly calibrated for what it’s looking for.

Calibration is like when you set your clock -- you calibrate it by looking at the time on another clock. There are clocks you can trust to be right, and there are clocks you’re not sure of.

Another example... if one wants to check the lead content in a plastic toy with this kind of portable XRF gun, among other things one first has to measure an appropriate sample containing a certified amount of lead. That way the gun software can learn what that amount of lead looks like. After it knows that, it can look at a toy, compare the number it gets to what it knows from the certified reference, and come up with a number. But if you don’t really know how much is in your reference and your sample is tricky, you can’t compare back and hope to come up with a useful number.
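
Just to make that concrete, here is a minimal sketch of what a two-point calibration looks like in software. Every number in it (the standards, the counts) is invented for illustration; this is not PTL’s procedure or their gun’s actual software:

```python
# Two hypothetical certified reference standards:
# (certified lead concentration in ppm, counts the gun measured on it)
standards = [(0.0, 120.0), (500.0, 9620.0)]

# Fit the straight line counts = slope * ppm + intercept through them.
(c0, n0), (c1, n1) = standards
slope = (n1 - n0) / (c1 - c0)     # counts per ppm
intercept = n0 - slope * c0       # background counts at zero lead

def counts_to_ppm(counts):
    """Convert raw counts from an unknown sample back to a concentration."""
    return (counts - intercept) / slope

# An unknown toy reads 4,870 counts -> about 250 ppm lead.
print(counts_to_ppm(4870.0))  # 250.0
# If the certified values above are wrong, every answer below is wrong too.
```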

4. The carpet in this testing is moving on a conveyor; there are questions about what this does to accuracy.

5. The designer soil used correlates somewhat with vacuumed dirt. But extractors in the real world clean the stuff that is left behind after vacuuming, so designer soil based on vacuumed dirt isn’t particularly relevant for testing extractors and the like -- and a relevance test hasn’t been done.

6. The designer dirt isn’t worked into the test samples like real soil
would be, or on a real carpet, or over a real cushion etc. Something
that didn’t do particularly well in the real world might do great in
the testing. Or the other way around! Nobody knows. There’s
evidence both ways.

7. When particles get wet they may clump together and stay that
way after they dry. XRF would see these clumps as less dirt than if
they were scattered. This may mess things up. Nobody knows.

8. Sometimes the energies coming off the designer soils mess with each other and the XRF gets wonky signals. When this happens the counts are off. One can correct this problem with math, but CRI, in a hurry, chose not to correct it. If it had been corrected, the results would be that much more accurate.
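
For the math-inclined, here is a toy sketch of the kind of overlap correction that was skipped. The spill-over percentages are invented, and real spectral deconvolution is more involved, but the idea is the same: the raw counts are a mixture, and you have to solve for the un-mixed values:

```python
import numpy as np

# Rows are detector channels (one per element's peak), columns are elements.
# Each entry: counts that one unit of an element contributes to a channel.
response = np.array([
    [1.00, 0.15],  # channel A: all of element A, plus 15% spill from element B
    [0.20, 1.00],  # channel B: all of element B, plus 20% spill from element A
])

raw = np.array([1150.0, 1200.0])  # counts the gun actually records

# Uncorrected: take raw counts at face value.
# Corrected: solve response @ true = raw for the true per-element counts.
true = np.linalg.solve(response, raw)
print("raw:      ", raw)    # [1150. 1200.]
print("corrected:", true)   # [1000. 1000.] -- overlap inflated both readings
```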

9. XRF gets wonky signals from moisture. Because of this they are
supposed to wait until cleaned samples are completely dry before
scanning them again. But they don’t check for moisture; they just
wait overnight (except for Friday tests I suppose), which PTL’s
data shows is sometimes not sufficient. Some samples may still be
damp when scanned, which might mess things up and make
something look better or worse (probably better) than something
else. Nobody knows.

10. PTL's XRF model increases in temperature fairly rapidly while it’s on. It has been shown elsewhere that this temperature increase can make readings unrepeatable. If readings are unrepeatable, who knows which reading is right? Has this effect been looked at? Don't know.

11. The time between measurements also may have an effect -- it has been shown elsewhere with the same gun that readings of the same spot taken at different times are very different from each other. This shouldn’t be. The reading taken at 9 a.m. should match a reading taken at 9:30 a.m. How can you know which reading is right, if any? Has this been studied? Don't know.

12. The angle between this XRF gun and the carpet yarns can’t be changed; otherwise the readings get more messed up. But cleaning moves the yarns, so that would mess things up. Then they rake the yarns after cleaning as if this puts them all back exactly where they were. Does raking put the yarns back exactly where they were before cleaning? Don't think so, but nobody knows. Is averaging out these additional errors enough? Don’t know, because all the errors haven’t been calculated.

13. Even if they COULD put all the yarns back where they started
before cleaning, some of the leftover soils themselves would have
been moved during the cleaning process, messing things up
differently.

14. Where the soil is on the fibers matters to the results. Soil that
is closer to the gun gets counted more than soil farther away. For
instance, when you look up at an airplane in the sky, you don’t
think that it’s really an inch long, even if it looks that way. But
XRF thinks it’s actually an inch long. This is a problem, maybe the
largest problem in the testing. If you can wash things down the
fibers so the soil is farther away from the gun, and do no extraction
at all, you could still look like you extracted a lot.
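
Here is a toy sketch of why position matters, using the standard exponential-attenuation idea (Beer-Lambert). The attenuation coefficient is a made-up number for illustration, not a measured value for carpet fiber or these soils:

```python
import math

MU = 1.5  # hypothetical attenuation coefficient, per mm of overlying material

def detected_fraction(depth_mm):
    """Fraction of a buried particle's fluorescence that escapes to the detector."""
    return math.exp(-MU * depth_mm)

for depth in (0.0, 0.5, 1.0, 2.0):
    print(f"soil {depth} mm down the pile -> {detected_fraction(depth):.0%} of its signal seen")
# Output: 100%, 47%, 22%, 5% -- identical amounts of soil read very
# differently depending on where they sit, so pushing soil downward
# makes it "vanish" without removing it.
```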

15. There needs to be enough soil to detect. For example, if you have your music cranked up, you can’t figure out what I’m saying if I whisper. Same with this XRF set-up… it can’t tell what the carpet is “saying” when it’s too quiet (if something cleaned well). So the exact results are a guess.
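
To put the whisper problem in numbers: a common rule of thumb in analytical work is that you can’t reliably quantify a signal below roughly three standard deviations of the blank (background) readings. A sketch with invented counts, borrowing the hypothetical slope from the calibration sketch above:

```python
import statistics

# Hypothetical repeat scans of a clean (blank) sample -- pure background noise.
blank_counts = [118, 125, 121, 130, 116, 124, 119, 127]
noise = statistics.stdev(blank_counts)   # ~4.8 counts

slope = 19.0  # counts per ppm, from the earlier (invented) calibration line

# Common 3-sigma rule of thumb for the limit of detection.
lod_ppm = 3 * noise / slope
print(f"detection limit ~ {lod_ppm:.1f} ppm")   # ~0.8 ppm
# Any reported value below this is noise dressed up as a measurement.
```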

16. Nobody knows if the results from PTL would be the same as results from another lab. If another lab could repeat the same results, then that would show the testing at least is precise outside of PTL. (This would not mean, however, that the results would be right either way.) When they say that their results are precise, it sounds like they mean that their results are good, but what precision really means is that the results are close together, even if they are not very close to the goal. Compare that with accuracy: the results are not as close to each other, but they are all closer to the goal. Ideally the results from a test would be close to the bull's eye every time, having both accuracy and precision.
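
Here is the difference in toy numbers, with the “true” value and both labs’ readings invented for illustration:

```python
import statistics

true_value = 100.0  # pretend we somehow know the real amount of soil left behind

lab_precise  = [82.0, 83.0, 82.5, 81.5, 83.5]    # tightly grouped, but biased
lab_accurate = [96.0, 104.0, 99.0, 103.0, 98.0]  # more scatter, centered on truth

for name, runs in (("precise-only", lab_precise), ("accurate", lab_accurate)):
    spread = statistics.stdev(runs)            # precision: how tight the group is
    bias = statistics.mean(runs) - true_value  # accuracy: how far from the truth
    print(f"{name:>12}: spread = {spread:.2f}, bias = {bias:+.2f}")
# precise-only: spread = 0.79, bias = -17.50  <- repeatable, and repeatably wrong
#     accurate: spread = 3.39, bias = +0.00
```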

17. The test has not been validated. Nobody knows if it really works. The results need to be compared to results of other testing methods that have already been established and accepted as scientifically sound and relevant, like the ASTM vacuum testing standard for starters. Validation methods do not include spectrophotometry or weighing samples -- those tests are what we’ve been trying to replace because they don’t work well enough! “Validating” against such lame methods is like saying, "She weighs as much as a duck so she’s a witch."

18. After studying and fixing everything that may need to be fixed,
properly calculating bias and precision like everybody else has to,
the testing should be shown to be relevant. Meaning, does the top-
ranking extractor or system in the testing work best in the field on
real, walked-on carpets? Does the testing actually answer the
question of what to use to clean all carpets with?

19. This program also needs a thorough and true independent peer
review. Maybe even an open public peer review?

20. If one part of an industry is going to mandate the business of another part of an industry, it seems that the least they could do is make sure the science is as solid as it can be first.
