by Deborah Lema
I love carpet. I love science. …I love NASA. But when I see that the
gun at PTL isn’t the one in the press releases, and I see NASA say
in press releases that this XRF testing measures the water in
carpets (it doesn’t), and I see a VP of marketing from a major
carpet mill saying in yet another release that this XRF testing sees
cleaning residue too (it doesn’t), it is obvious that even the people
most involved with the CRI SOA program are confused or
misinformed or something.
6. The designer dirt isn't worked into the test samples the way real
soil would be, nor applied to a real carpet or over a real cushion. Something
that didn’t do particularly well in the real world might do great in
the testing. Or the other way around! Nobody knows. There’s
evidence both ways.
7. When particles get wet they may clump together and stay that
way after they dry. XRF would see these clumps as less dirt than if
they were scattered. This may mess things up. Nobody knows.
8. Sometimes the X-rays coming off the designer soils interfere with
each other and the XRF gets wonky signals. When this happens
the counts are off. The problem can be corrected with math, but in
its hurry, CRI chose not to correct it. If it had been corrected, the
results would be that much more accurate.
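The math the author alludes to is, in spectroscopy generally, a linear-algebra problem: if you know how much of each element's fluorescence spills into the other elements' counting windows, you can solve for the interference-corrected counts. Here is a minimal sketch of that idea; the overlap matrix and raw counts are entirely made-up illustrative numbers, not measured XRF data:

```python
import numpy as np

# Hypothetical 3x3 overlap matrix: entry [i, j] is the fraction of
# element j's fluorescence that lands in element i's counting window.
# A real matrix would be measured from pure-element reference standards.
overlap = np.array([
    [1.00, 0.12, 0.03],
    [0.08, 1.00, 0.10],
    [0.02, 0.15, 1.00],
])

# Counts actually observed in each window (invented numbers).
raw_counts = np.array([520.0, 340.0, 210.0])

# Solving the linear system undoes the mutual interference.
corrected = np.linalg.solve(overlap, raw_counts)

print(corrected)
```

The point is only that such corrections are routine and computable; skipping them leaves the mutual interference baked into the counts.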
9. XRF gets wonky signals from moisture. Because of this they are
supposed to wait until cleaned samples are completely dry before
scanning them again. But they don’t check for moisture; they just
wait overnight (except for Friday tests I suppose), which PTL’s
data shows is sometimes not sufficient. Some samples may still be
damp when scanned, which might mess things up and make
something look better or worse (probably better) than something
else. Nobody knows.
12. The angle between this XRF gun and the carpet yarns can't
be changed; otherwise the readings get more messed up. But
cleaning moves the yarns, so that would mess things up. Then they
rake the yarns after cleaning as if this puts them all back exactly
where they were. Does raking put the yarns back exactly where
they were before cleaning? Don't think so, but nobody knows. Is
averaging out these additional errors enough? Don’t know, because
all the errors haven’t been calculated.
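On the question of whether averaging is enough: averaging repeat scans shrinks random scatter, but it cannot remove a systematic offset, which is what a consistent post-raking change in yarn orientation would be. A toy simulation (all numbers invented) makes the distinction concrete:

```python
import random

random.seed(42)

TRUE_VALUE = 100.0
BIAS = 5.0        # systematic offset, e.g. yarns consistently re-oriented
NOISE_SD = 10.0   # random scatter between repeat scans

def one_scan():
    # Each scan = truth + fixed bias + fresh random noise.
    return TRUE_VALUE + BIAS + random.gauss(0.0, NOISE_SD)

for n in (1, 10, 1000):
    mean = sum(one_scan() for _ in range(n)) / n
    print(f"n={n:5d}  mean={mean:8.2f}  error vs truth={mean - TRUE_VALUE:+.2f}")
```

As n grows, the mean settles near 105, not 100: the random part averages away, the bias does not. That is why the uncalculated errors matter; averaging only fixes the random ones.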
13. Even if they COULD put all the yarns back where they started
before cleaning, some of the leftover soils themselves would have
been moved during the cleaning process, messing things up
differently.
14. Where the soil is on the fibers matters to the results. Soil that
is closer to the gun gets counted more than soil farther away. For
instance, when you look up at an airplane in the sky, you don't
think that it's really an inch long, even if it looks that way. But
XRF has no sense of distance; it thinks the airplane actually is an
inch long. This is a problem, maybe the
largest problem in the testing. If you can wash things down the
fibers so the soil is farther away from the gun, and do no extraction
at all, you could still look like you extracted a lot.
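The depth effect described above can be sketched with a simple Beer-Lambert-style attenuation model: fluorescence from soil deeper in the pile is exponentially weakened before it reaches the detector. The attenuation coefficient below is purely illustrative, not a measured carpet or fiber property:

```python
import math

# Beer-Lambert style attenuation: signal from depth d is reduced by
# exp(-mu * d). MU here is a made-up value for illustration only.
MU = 1.5  # per millimeter, hypothetical

def detected_signal(soil_amount, depth_mm):
    """Signal the gun sees from a given amount of soil at a given depth."""
    return soil_amount * math.exp(-MU * depth_mm)

# The same total soil, two placements:
surface = detected_signal(100.0, 0.5)      # soil near the tips, close to the gun
washed_down = detected_signal(100.0, 4.0)  # same soil pushed down the pile

print(f"near tips: {surface:.1f}, washed down: {washed_down:.2f}")
```

Under these invented numbers the washed-down soil contributes a tiny fraction of the surface soil's signal, which is exactly the loophole described: move the soil down instead of extracting it, and the reading improves anyway.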
16. Nobody knows if the results from PTL would be the same as
results from another lab. If another lab could repeat the same
results, then that would show the testing at least is precise outside
of PTL. (This would not mean, however, that the results would be
right either way.) When they say that their results are precise, it
sounds like they mean that their results are good, but precision
only means the results are close to each other; it says nothing
about whether they are anywhere near the goal.
Compare that with accuracy: the results may be more scattered, but
they center on the goal.
Ideally the results from a test would be close to the bull's eye every
time, having both accuracy and precision.
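The precision-versus-accuracy distinction is easy to show with numbers. A small sketch using invented readings: one method clusters tightly far from the target, the other scatters around it:

```python
from statistics import mean, pstdev

TARGET = 50.0  # the bull's eye: the true value (hypothetical)

precise_but_off = [62.1, 61.8, 62.3, 62.0, 61.9]     # tight cluster, wrong place
accurate_but_loose = [46.0, 55.0, 48.5, 52.5, 49.0]  # scattered around target

for name, runs in [("precise but biased", precise_but_off),
                   ("accurate but noisy", accurate_but_loose)]:
    # Spread (standard deviation) measures precision; the offset of the
    # mean from the target measures bias, i.e. lack of accuracy.
    print(f"{name}: spread={pstdev(runs):.2f}, bias={mean(runs) - TARGET:+.2f}")
```

The first set is more precise (smaller spread) yet far less accurate (large bias), which is why "our results are precise" by itself proves nothing about being right.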
17. The test has not been validated. Nobody knows if it really
works. The results need to be compared to results of other testing
methods that have already been established and accepted as
scientifically sound and relevant, like the ASTM vacuum testing
standard for starters. Validation methods do not include
spectrophotometry or weighing samples—those tests are what
we’ve been trying to replace because they don’t work well enough!
“Validating” against such lame methods is like saying, "She weighs
as much as a duck so she’s a witch."
18. After studying and fixing everything that may need fixing, and
properly calculating bias and precision like everybody else has to,
the testing should be shown to be relevant. Meaning: does the
top-ranking extractor or system in the testing work best in the field on
real, walked-on carpets? Does the testing actually answer the
question of what to use to clean all carpets with?
19. This program also needs a thorough and true independent peer
review. Maybe even an open public peer review?