D.L. Bailey
J.L. Humm
A. Todd-Pokropek
A. van Aswegen
Technical Editors
The following States are Members of the International Atomic Energy Agency:
AFGHANISTAN
ALBANIA
ALGERIA
ANGOLA
ARGENTINA
ARMENIA
AUSTRALIA
AUSTRIA
AZERBAIJAN
BAHAMAS
BAHRAIN
BANGLADESH
BELARUS
BELGIUM
BELIZE
BENIN
BOLIVIA
BOSNIA AND HERZEGOVINA
BOTSWANA
BRAZIL
BRUNEI DARUSSALAM
BULGARIA
BURKINA FASO
BURUNDI
CAMBODIA
CAMEROON
CANADA
CENTRAL AFRICAN REPUBLIC
CHAD
CHILE
CHINA
COLOMBIA
CONGO
COSTA RICA
CÔTE D'IVOIRE
CROATIA
CUBA
CYPRUS
CZECH REPUBLIC
DEMOCRATIC REPUBLIC OF THE CONGO
DENMARK
DOMINICA
DOMINICAN REPUBLIC
ECUADOR
EGYPT
EL SALVADOR
ERITREA
ESTONIA
ETHIOPIA
FIJI
FINLAND
FRANCE
GABON
GEORGIA
GERMANY
GHANA
GREECE
GUATEMALA
HAITI
HOLY SEE
HONDURAS
HUNGARY
ICELAND
INDIA
INDONESIA
IRAN, ISLAMIC REPUBLIC OF
IRAQ
IRELAND
ISRAEL
ITALY
JAMAICA
JAPAN
JORDAN
KAZAKHSTAN
KENYA
KOREA, REPUBLIC OF
KUWAIT
KYRGYZSTAN
LAO PEOPLE'S DEMOCRATIC REPUBLIC
LATVIA
LEBANON
LESOTHO
LIBERIA
LIBYA
LIECHTENSTEIN
LITHUANIA
LUXEMBOURG
MADAGASCAR
MALAWI
MALAYSIA
MALI
MALTA
MARSHALL ISLANDS
MAURITANIA, ISLAMIC REPUBLIC OF
MAURITIUS
MEXICO
MONACO
MONGOLIA
MONTENEGRO
MOROCCO
MOZAMBIQUE
MYANMAR
NAMIBIA
NEPAL
NETHERLANDS
NEW ZEALAND
NICARAGUA
NIGER
NIGERIA
NORWAY
OMAN
PAKISTAN
PALAU
PANAMA
PAPUA NEW GUINEA
PARAGUAY
PERU
PHILIPPINES
POLAND
PORTUGAL
QATAR
REPUBLIC OF MOLDOVA
ROMANIA
RUSSIAN FEDERATION
RWANDA
SAN MARINO
SAUDI ARABIA
SENEGAL
SERBIA
SEYCHELLES
SIERRA LEONE
SINGAPORE
SLOVAKIA
SLOVENIA
SOUTH AFRICA
SPAIN
SRI LANKA
SUDAN
SWAZILAND
SWEDEN
SWITZERLAND
SYRIAN ARAB REPUBLIC
TAJIKISTAN
THAILAND
THE FORMER YUGOSLAV REPUBLIC OF MACEDONIA
TOGO
TRINIDAD AND TOBAGO
TUNISIA
TURKEY
UGANDA
UKRAINE
UNITED ARAB EMIRATES
UNITED KINGDOM OF GREAT BRITAIN AND NORTHERN IRELAND
UNITED REPUBLIC OF TANZANIA
UNITED STATES OF AMERICA
URUGUAY
UZBEKISTAN
VENEZUELA, BOLIVARIAN REPUBLIC OF
VIET NAM
YEMEN
ZAMBIA
ZIMBABWE
The Agency's Statute was approved on 23 October 1956 by the Conference on the Statute of the
IAEA held at United Nations Headquarters, New York; it entered into force on 29 July 1957. The
Headquarters of the Agency are situated in Vienna. Its principal objective is to accelerate and enlarge the
contribution of atomic energy to peace, health and prosperity throughout the world.
COPYRIGHT NOTICE
All IAEA scientific and technical publications are protected by the terms of
the Universal Copyright Convention as adopted in 1952 (Berne) and as revised
in 1972 (Paris). The copyright has since been extended by the World Intellectual
Property Organization (Geneva) to include electronic and virtual intellectual
property. Permission to use whole or parts of texts contained in IAEA publications
in printed or electronic form must be obtained and is usually subject to royalty
agreements. Proposals for non-commercial reproductions and translations are
welcomed and considered on a case-by-case basis. Enquiries should be addressed
to the IAEA Publishing Section at:
Marketing and Sales Unit, Publishing Section
International Atomic Energy Agency
Vienna International Centre
PO Box 100
1400 Vienna, Austria
fax: +43 1 2600 29302
tel.: +43 1 2600 22417
email: sales.publications@iaea.org
http://www.iaea.org/books
© IAEA, 2014
Printed by the IAEA in Austria
December 2014
STI/PUB/1617
FOREWORD
Nuclear medicine is the use of radionuclides in medicine for diagnosis,
staging of disease, therapy and monitoring the response of a disease process.
It is also a powerful translational tool in the basic sciences, such as biology, in
drug discovery and in pre-clinical medicine. Developments in nuclear medicine
are driven by advances in this multidisciplinary science that includes physics,
chemistry, computing, mathematics, pharmacology and biology.
This handbook comprehensively covers the physics of nuclear medicine.
It is intended for undergraduate and postgraduate students of medical physics.
It will also serve as a resource for interested readers from other disciplines, for
example, clinicians, radiochemists and medical technologists who would like to
familiarize themselves with the basic concepts and practice of nuclear medicine
physics.
The scope of the book is intentionally broad. Physics is a vital aspect of
nearly every area of nuclear medicine, including imaging instrumentation,
image processing and reconstruction, data analysis, radionuclide production,
radionuclide therapy, radiopharmacy, radiation protection and biology. The
authors were drawn from a variety of regions and were selected because of their
knowledge, teaching experience and scientific acumen.
This book was written to address an urgent need for a comprehensive,
contemporary text on the physics of nuclear medicine. It complements similar
texts in radiation oncology physics and diagnostic radiology physics that have
been published by the IAEA.
Endorsement of this handbook has been granted by the following
international professional bodies: the American Association of Physicists in
Medicine (AAPM), the AsiaOceania Federation of Organizations for Medical
Physics (AFOMP), the Australasian College of Physical Scientists and Engineers
in Medicine (ACPSEM), the European Federation of Organisations for Medical
Physics (EFOMP), the Federation of African Medical Physics Organisations
(FAMPO), and the World Federation of Nuclear Medicine and Biology
(WFNMB).
The following international experts are gratefully acknowledged
for making major contributions to this handbook as technical editors:
D.L. Bailey (Australia), J.L. Humm (United States of America), A. Todd-Pokropek
(United Kingdom) and A. van Aswegen (South Africa). The IAEA officers
responsible for this publication were S. Palm and G.L. Poli of the Division of
Human Health.
EDITORIAL NOTE
Although great care has been taken to maintain the accuracy of information contained
in this publication, neither the IAEA nor its Member States assume any responsibility for
consequences which may arise from its use.
The use of particular designations of countries or territories does not imply any
judgement by the publisher, the IAEA, as to the legal status of such countries or territories, of
their authorities and institutions or of the delimitation of their boundaries.
The mention of names of specific companies or products (whether or not indicated as
registered) does not imply any intention to infringe proprietary rights, nor should it be construed
as an endorsement or recommendation on the part of the IAEA.
The IAEA has no responsibility for the persistence or accuracy of URLs for external or
third party Internet web sites referred to in this book and does not guarantee that any content
on such web sites is, or will remain, accurate or appropriate.
Preface
Nuclear medicine is the study and utilization of radioactive compounds in
medicine to image and treat human disease. It relies on the tracer principle first
espoused by Georg Karl von Hevesy in the early 1920s. The tracer principle is
the study of the fate of compounds in vivo using minute amounts of radioactive
tracers which do not elicit any pharmacological response by the body to the tracer.
Today, the same principle is used to study many aspects of physiology, such as
cellular metabolism, DNA (deoxyribonucleic acid) proliferation, blood flow in
organs, organ function, receptor expression and abnormal physiology, externally
using sensitive imaging devices. Larger amounts of radionuclides are also applied
to treat patients with radionuclide therapy, especially in disseminated diseases
such as advanced metastatic cancer, as this form of therapy has the ability to
target abnormal cells to treat the disease anywhere in the body.
Nuclear medicine relies on function. For this reason, it is referred to as
functional imaging. Rather than just imaging a portion of the body believed
to have some abnormality, as is done with X ray imaging in radiology, nuclear
medicine scans often depict the whole body distribution of the radioactive
compound, often acquired as a sequence of images over time showing the
temporal course of the radiotracer in the body.
There are two main types of radiation of interest for imaging in nuclear
medicine: γ ray emission from excited nuclei, and annihilation (or coincidence)
radiation (two 511 keV photons) arising after positron emission from proton-rich nuclei. Gamma
photons are detected with a gamma camera as either planar (2D) images or
tomographically in 3D using single photon emission computed tomography.
The annihilation photons from positron emission are detected using a positron
emission tomography (PET) camera. The most recent major development in this
field is the combination of gamma cameras or PET cameras with high resolution
structural imaging devices, either X ray computed tomography (CT) scanners
or, increasingly, magnetic resonance imaging (MRI) scanners, in a single imaging
device. The combined PET/CT (or PET/MRI) scanner represents one of the most
sophisticated and powerful ways to visualize normal and altered physiology in
the body.
It is in this complex environment that the medical physicist, along with
nuclear medicine physicians and technologists/radiographers, plays a significant
role in the multidisciplinary team needed for medical diagnosis. The physicist is
responsible for such areas as instrumentation performance, radiation dosimetry
for treatment of patients, radiation protection of staff and accuracy of the data
analysis. The physicist draws on training in radiation and nuclear science,
in addition to scientific rigour and attention to detail in experiments and
measurements, to join forces with the other members of the multidisciplinary
team in delivering optimal health care. Patients are frequently treated on the
basis of the result of the scans they receive and these, therefore, have to be of the
highest quality.
This handbook was conceived and written by physicists, and is intended
primarily for physicists, although interested readers from medical, paramedical
and other science and engineering backgrounds could find it useful. The level
of understanding of the material covered will be different depending on the
background of the reader. Readers are encouraged to visit the IAEA Human
Health web site (http://www-naweb.iaea.org/NAHU/index.html) to discover the
wealth of resources available.
The technical editors and authors, selected for their experience and in
recognition of their contributions to the field, were drawn from around the world
and, thus, this book represents a truly international collaboration. The technical
editors travelled to the IAEA headquarters in Vienna on four occasions over three
years to bring this project to fruition. We would like to thank all of the authors for
their important contribution.
D.L. Bailey, J.L. Humm
A. Todd-Pokropek, A. van Aswegen
Contents

CHAPTER 1. BASIC PHYSICS FOR NUCLEAR MEDICINE . . . . . . . . . 1

1.1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
     1.1.1. Fundamental physical constants . . . . . . . . . . . . . . . 1
     1.1.2. Physical quantities and units . . . . . . . . . . . . . . . . 2
     1.1.3. Classification of radiation . . . . . . . . . . . . . . . . . 4
     1.1.4. Classification of ionizing radiation . . . . . . . . . . . . 4
     1.1.5. Classification of indirectly ionizing photon radiation . . . 5
     1.1.6. Characteristic X rays . . . . . . . . . . . . . . . . . . . . 5
     1.1.7. Bremsstrahlung . . . . . . . . . . . . . . . . . . . . . . . 5
     1.1.8. Gamma rays . . . . . . . . . . . . . . . . . . . . . . . . . 6
     1.1.9. Annihilation quanta . . . . . . . . . . . . . . . . . . . . . 6
     1.1.10. Radiation quantities and units . . . . . . . . . . . . . . . 7
1.2. BASIC DEFINITIONS FOR ATOMIC STRUCTURE . . . . . . . . . . . . . . 8
     1.2.1. Rutherford model of the atom . . . . . . . . . . . . . . . . 10
     1.2.2. Bohr model of the hydrogen atom . . . . . . . . . . . . . . . 10
1.3. BASIC DEFINITIONS FOR NUCLEAR STRUCTURE . . . . . . . . . . . . . 10
     1.3.1. Nuclear radius . . . . . . . . . . . . . . . . . . . . . . . 12
     1.3.2. Nuclear binding energy . . . . . . . . . . . . . . . . . . . 12
     1.3.3. Nuclear fusion and fission . . . . . . . . . . . . . . . . . 13
     1.3.4. Two-particle collisions and nuclear reactions . . . . . . . . 14
1.4. RADIOACTIVITY . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
     1.4.1. Decay of radioactive parent into a stable or unstable
            daughter . . . . . . . . . . . . . . . . . . . . . . . . . . 17
     1.4.2. Radioactive series decay . . . . . . . . . . . . . . . . . . 19
     1.4.3. Equilibrium in parent–daughter activities . . . . . . . . . . 21
     1.4.4. Production of radionuclides (nuclear activation) . . . . . . 22
     1.4.5. Modes of radioactive decay . . . . . . . . . . . . . . . . . 23
     1.4.6. Alpha decay . . . . . . . . . . . . . . . . . . . . . . . . . 25
     1.4.7. Beta minus decay . . . . . . . . . . . . . . . . . . . . . . 26
     1.4.8. Beta plus decay . . . . . . . . . . . . . . . . . . . . . . . 26
     1.4.9. Electron capture . . . . . . . . . . . . . . . . . . . . . . 27
     1.4.10. Gamma decay and internal conversion . . . . . . . . . . . . 27
     1.4.11. Characteristic (fluorescence) X rays and
             Auger electrons . . . . . . . . . . . . . . . . . . . . . . 28
1.5. ELECTRON INTERACTIONS WITH MATTER . . . . . . . . . . . . . . . . 29
     1.5.1. Electron–orbital interactions . . . . . . . . . . . . . . . . 29
     1.5.2. Electron–nucleus interactions . . . . . . . . . . . . . . . . 29
2.1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.2. RADIATION EFFECTS AND TIMESCALES . . . . . . . . . . . . . . . . . 49
2.3. BIOLOGICAL PROPERTIES OF IONIZING RADIATION . . . . . . . . . . . 51
     2.3.1. Types of ionizing radiation . . . . . . . . . . . . . . . . . 51
2.4. MOLECULAR EFFECTS OF RADIATION AND THEIR MODIFIERS . . . . . . . . 53
     2.4.1. Role of oxygen . . . . . . . . . . . . . . . . . . . . . . . 54
     2.4.2. Bystander effects . . . . . . . . . . . . . . . . . . . . . . 54
2.5. DNA DAMAGE AND REPAIR . . . . . . . . . . . . . . . . . . . . . . 55
     2.5.1. DNA damage . . . . . . . . . . . . . . . . . . . . . . . . . 55
     2.5.2. DNA repair . . . . . . . . . . . . . . . . . . . . . . . . . 55
2.6. CELLULAR EFFECTS OF RADIATION . . . . . . . . . . . . . . . . . . 56
     2.6.1. Concept of cell death . . . . . . . . . . . . . . . . . . . . 56
     2.6.2. Cell survival curves . . . . . . . . . . . . . . . . . . . . 56
     2.6.3. Dose deposition characteristics: linear energy transfer . . . 57
     2.6.4. Determination of relative biological effectiveness . . . . . 58
     2.6.5. The dose rate effect and the concept of repeat
            treatments . . . . . . . . . . . . . . . . . . . . . . . . . 62
     2.6.6. The basic linear–quadratic model . . . . . . . . . . . . . . 63
     2.6.7. Modification to the linear–quadratic model for
            radionuclide therapies . . . . . . . . . . . . . . . . . . . 64
     2.6.8. Quantitative intercomparison of different treatment
            types . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
     2.6.9. Cellular recovery processes . . . . . . . . . . . . . . . . . 65
     2.6.10. Consequence of radionuclide heterogeneity . . . . . . . . . 66
3.1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.2. BASIC PRINCIPLES OF RADIATION PROTECTION . . . . . . . . . . . . . 74
     3.2.1. The International Commission on Radiological
            Protection system of radiological protection . . . . . . . . 74
     3.2.2. Safety standards . . . . . . . . . . . . . . . . . . . . . . 76
     3.2.3. Radiation protection quantities and units . . . . . . . . . . 77
3.3. IMPLEMENTATION OF RADIATION PROTECTION IN
     A NUCLEAR MEDICINE FACILITY . . . . . . . . . . . . . . . . . . . 81
     3.3.1. General aspects . . . . . . . . . . . . . . . . . . . . . . . 81
     3.3.2. Responsibilities . . . . . . . . . . . . . . . . . . . . . . 82
     3.3.3. Radiation protection programme . . . . . . . . . . . . . . . 84
     3.3.4. Radiation protection committee . . . . . . . . . . . . . . . 84
     3.3.5. Education and training . . . . . . . . . . . . . . . . . . . 84
3.4. FACILITY DESIGN . . . . . . . . . . . . . . . . . . . . . . . . . 85
     3.4.1. Location and general layout . . . . . . . . . . . . . . . . . 85
     3.4.2. General building requirements . . . . . . . . . . . . . . . . 85
     3.4.3. Source security and storage . . . . . . . . . . . . . . . . . 86
     3.4.4. Structural shielding . . . . . . . . . . . . . . . . . . . . 87
     3.4.5. Classification of workplaces . . . . . . . . . . . . . . . . 87
     3.4.6. Workplace monitoring . . . . . . . . . . . . . . . . . . . . 88
     3.4.7. Radioactive waste . . . . . . . . . . . . . . . . . . . . . . 88
3.5. OCCUPATIONAL EXPOSURE . . . . . . . . . . . . . . . . . . . . . . 89
     3.5.1. Sources of exposure . . . . . . . . . . . . . . . . . . . . . 90
     3.5.2. Justification, optimization and dose limitation . . . . . . . 91
     3.5.3. Conditions for pregnant workers and young persons . . . . . . 91
     3.5.4. Protective clothing . . . . . . . . . . . . . . . . . . . . . 92
     3.5.5. Safe working procedures . . . . . . . . . . . . . . . . . . . 92
     3.5.6. Personal monitoring . . . . . . . . . . . . . . . . . . . . . 94
     3.5.7. Monitoring of the workplace . . . . . . . . . . . . . . . . . 95
     3.5.8. Health surveillance . . . . . . . . . . . . . . . . . . . . . 95
     3.5.9. Local rules and supervision . . . . . . . . . . . . . . . . . 96
3.6. PUBLIC EXPOSURE . . . . . . . . . . . . . . . . . . . . . . . . . 97
     3.6.1. Justification, optimization and dose limitation . . . . . . . 97
     3.6.2. Design considerations . . . . . . . . . . . . . . . . . . . . 97
     3.6.3. Exposure from patients . . . . . . . . . . . . . . . . . . . 98
     3.6.4. Transport of sources . . . . . . . . . . . . . . . . . . . . 98
3.7. MEDICAL EXPOSURE . . . . . . . . . . . . . . . . . . . . . . . . . 99
     3.7.1. Justification of medical exposure . . . . . . . . . . . . . . 99
     3.7.2. Optimization of protection . . . . . . . . . . . . . . . . . 100
     3.7.3. Helping in the care, support or comfort of patients . . . . . 107
     3.7.4. Biomedical research . . . . . . . . . . . . . . . . . . . . . 107
     3.7.5. Local rules . . . . . . . . . . . . . . . . . . . . . . . . . 108
3.8. POTENTIAL EXPOSURE . . . . . . . . . . . . . . . . . . . . . . . . 108
     3.8.1. Safety assessment and accident prevention . . . . . . . . . . 108
     3.8.2. Emergency plans . . . . . . . . . . . . . . . . . . . . . . . 110
     3.8.3. Reporting and lessons learned . . . . . . . . . . . . . . . . 111
3.9. QUALITY ASSURANCE . . . . . . . . . . . . . . . . . . . . . . . . 112
     3.9.1. General considerations . . . . . . . . . . . . . . . . . . . 112
     3.9.2. Audit . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
6.1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
     6.1.1. Radiation detectors – complexity and relevance . . . . . . . 196
     6.1.2. Interaction mechanisms, signal formation and
            detector type . . . . . . . . . . . . . . . . . . . . . . . 196
     6.1.3. Counting, current, integrating mode . . . . . . . . . . . . . 197
     6.1.4. Detector requirements . . . . . . . . . . . . . . . . . . . . 197
6.2. GAS FILLED DETECTORS . . . . . . . . . . . . . . . . . . . . . . . 200
     6.2.1. Basic principles . . . . . . . . . . . . . . . . . . . . . . 200
6.3. SEMICONDUCTOR DETECTORS . . . . . . . . . . . . . . . . . . . . . 202
     6.3.1. Basic principles . . . . . . . . . . . . . . . . . . . . . . 202
     6.3.2. Semiconductor detectors . . . . . . . . . . . . . . . . . . . 204
6.4. SCINTILLATION DETECTORS AND STORAGE
     PHOSPHORS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
     6.4.1. Basic principles . . . . . . . . . . . . . . . . . . . . . . 205
     6.4.2. Light sensors . . . . . . . . . . . . . . . . . . . . . . . . 206
     6.4.3. Scintillator materials . . . . . . . . . . . . . . . . . . . 209
7.1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
7.2. PRIMARY RADIATION DETECTION PROCESSES . . . . . . . . . . . . . . 215
     7.2.1. Scintillation counters . . . . . . . . . . . . . . . . . . . 215
     7.2.2. Gas filled detection systems . . . . . . . . . . . . . . . . 216
     7.2.3. Semiconductor detectors . . . . . . . . . . . . . . . . . . . 216
7.3. IMAGING DETECTORS . . . . . . . . . . . . . . . . . . . . . . . . 217
     7.3.1. The gamma camera . . . . . . . . . . . . . . . . . . . . . . 217
     7.3.2. The positron camera . . . . . . . . . . . . . . . . . . . . . 218
     7.3.3. Multiwire proportional chamber based X ray and
            γ ray imagers . . . . . . . . . . . . . . . . . . . . . . . 219
     7.3.4. Semiconductor imagers . . . . . . . . . . . . . . . . . . . . 220
     7.3.5. The autoradiography imager . . . . . . . . . . . . . . . . . 221
7.4. SIGNAL AMPLIFICATION . . . . . . . . . . . . . . . . . . . . . . . 222
     7.4.1. Typical amplifier . . . . . . . . . . . . . . . . . . . . . . 222
     7.4.2. Properties of amplifiers . . . . . . . . . . . . . . . . . . 224
7.5. SIGNAL PROCESSING . . . . . . . . . . . . . . . . . . . . . . . . 226
     7.5.1. Analogue signal utilization . . . . . . . . . . . . . . . . . 226
     7.5.2. Signal digitization . . . . . . . . . . . . . . . . . . . . . 226
     7.5.3. Production and use of timing information . . . . . . . . . . 228
7.6. OTHER ELECTRONICS REQUIRED BY IMAGING
     SYSTEMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
     7.6.1. Power supplies . . . . . . . . . . . . . . . . . . . . . . . 230
     7.6.2. Uninterruptible power supplies . . . . . . . . . . . . . . . 231
     7.6.3. Oscilloscopes . . . . . . . . . . . . . . . . . . . . . . . . 231
7.7. SUMMARY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
CHAPTER 1
BASIC PHYSICS FOR NUCLEAR MEDICINE
E.B. PODGORSAK
Department of Medical Physics,
McGill University,
Montreal, Canada
A.L. KESNER
Division of Human Health,
International Atomic Energy Agency,
Vienna
P.S. SONI
Medical Cyclotron Facility,
Board of Radiation and Isotope Technology,
Bhabha Atomic Research Centre,
Mumbai, India
1.1. INTRODUCTION
The technologies used in nuclear medicine for diagnostic imaging have
evolved over the last century, starting with Röntgen's discovery of X rays
and Becquerel's discovery of natural radioactivity. Each decade has brought
innovation in the form of new equipment, techniques, radiopharmaceuticals,
advances in radionuclide production and, ultimately, better patient care. All
such technologies have been developed and can only be practised safely with
a clear understanding of the behaviour and principles of radiation sources and
radiation detection. These central concepts of basic radiation physics and nuclear
physics are described in this chapter and should provide the requisite knowledge
for a more in depth understanding of the modern nuclear medicine technology
discussed in subsequent chapters.
1.1.1. Fundamental physical constants
The chapter begins with a short list of physical constants of importance to
general physics as well as to nuclear and radiation physics. The data listed below
were taken from the CODATA set of values issued in 2006 and are available
from a web site supported by the National Institute of Standards and Technology
(NIST), United States of America: http://physics.nist.gov/cuu/Constants
- Avogadro's number: NA = 6.022 × 10²³ mol⁻¹ or 6.022 × 10²³ atoms/mol.
- Speed of light in vacuum: c = 2.998 × 10⁸ m/s ≈ 3 × 10⁸ m/s.
- Electron charge: e = 1.602 × 10⁻¹⁹ C.
- Electron and positron rest mass: me = 0.511 MeV/c².
- Proton rest mass: mp = 938.3 MeV/c².
- Neutron rest mass: mn = 939.6 MeV/c².
- Atomic mass unit: u = 931.5 MeV/c².
- Planck's constant: h = 6.626 × 10⁻³⁴ J·s.
- Electric constant (permittivity of vacuum): ε₀ = 8.854 × 10⁻¹² C·V⁻¹·m⁻¹.
- Magnetic constant (permeability of vacuum): μ₀ = 4π × 10⁻⁷ V·s·A⁻¹·m⁻¹.
- Newtonian gravitation constant: G = 6.672 × 10⁻¹¹ m³·kg⁻¹·s⁻².
- Proton mass/electron mass: mp/me = 1836.0.
- Specific charge of electron: e/me = 1.758 × 10¹¹ C/kg.
1.1.2. Physical quantities and units
A physical quantity is defined as a quantity that can be used in mathematical
equations of science and technology. It is characterized by its numerical value
(magnitude) and associated unit. The following rules apply to physical quantities
and their units in general:
Symbols for physical quantities are set in italics (sloping type), while
symbols for units are set in roman (upright) type (e.g. m=21 kg;
E=15 MeV; K=220 Gy).
Superscripts and subscripts used with physical quantities are set in italics if
they represent variables, quantities or running numbers; they are in roman
type if they are descriptive (e.g. Nx and me, but λmax, Eab and μtr).
Symbols for vector quantities are set in bold italics.
The currently used metric system of units is known as the International
System of Units (SI). The system is founded on base units for seven basic
physical quantities. All other quantities and units are derived from the seven base
quantities and units. The seven base SI quantities and their units are:
(a) Length l (metre, m);
(b) Mass m (kilogram, kg);
(c) Time t (second, s);
(d) Electric current I (ampere, A);
(e) Temperature T (kelvin, K);
(f) Amount of substance (mole, mol);
(g) Luminous intensity (candela, cd).
TABLE 1.1.

Quantity        | SI unit | Units commonly used in radiation physics | Conversion
Length          | m       | nm, fm                                   |
Mass            | kg      | MeV/c²                                   | 1 MeV/c² = 1.78 × 10⁻³⁰ kg
Time            | s       | ms, μs, ns, ps                           |
Current         | A       | mA, μA, nA, pA                           |
Temperature     | K       |                                          |
Mass density    | kg/m³   | g/cm³                                    | 1 kg/m³ = 10⁻³ g/cm³
Current density | A/m²    |                                          |
Velocity        | m/s     |                                          |
Acceleration    | m/s²    |                                          |
Frequency       | Hz      |                                          | 1 Hz = 1 s⁻¹
Electric charge | C       | e                                        | 1 e = 1.602 × 10⁻¹⁹ C
Force           | N       |                                          | 1 N = 1 kg·m·s⁻²
Pressure        | Pa      |                                          |
Momentum        | N·s     |                                          |
Energy          | J       |                                          |
Power           | W       |                                          |
TABLE 1.2. RADIATION QUANTITIES, UNITS AND CONVERSION BETWEEN
OLD AND SI UNITS

Quantity           | Definition       | SI unit               | Old unit                | Conversion
Exposure X         | X = ΔQ/mair      | 2.58 × 10⁻⁴ C/kg air  | 1 R = 1 esu/cm³ airSTP  | 1 R = 2.58 × 10⁻⁴ C/kg air
Kerma K            | K = ΔEtr/Δm      | 1 Gy = 1 J/kg         |                         |
Dose D             | D = ΔEab/Δm      | 1 Gy = 1 J/kg         | 1 rad = 100 erg/g       | 1 Gy = 100 rad
Equivalent dose HT | HT = D·wR        | 1 Sv                  | 1 rem                   | 1 Sv = 100 rem
Effective dose E   | E = Σ_T HT·wT    | 1 Sv                  | 1 rem                   | 1 Sv = 100 rem
Activity A         | A = λN           | 1 Bq = 1 s⁻¹          | 1 Ci = 3.7 × 10¹⁰ s⁻¹   | 1 Bq = 1 Ci/(3.7 × 10¹⁰)
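The old and SI units in Table 1.2 differ only by fixed scale factors, so converting between the two systems is straightforward multiplication. A minimal sketch (the factor values are taken from the table; the 10 mCi example activity is illustrative):

```python
# Conversion factors between old and SI radiation units (Table 1.2).
RAD_PER_GY = 100.0   # 1 Gy = 100 rad (absorbed dose)
REM_PER_SV = 100.0   # 1 Sv = 100 rem (equivalent/effective dose)
BQ_PER_CI = 3.7e10   # 1 Ci = 3.7e10 Bq (activity)

def ci_to_bq(a_ci):
    """Convert an activity from curies to becquerels."""
    return a_ci * BQ_PER_CI

def gy_to_rad(d_gy):
    """Convert an absorbed dose from gray to rad."""
    return d_gy * RAD_PER_GY

# An administered activity of 10 mCi corresponds to 370 MBq:
activity_bq = ci_to_bq(10e-3)   # ~3.7e8 Bq
dose_rad = gy_to_rad(0.05)      # 0.05 Gy is 5 rad
```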
electrons while the nuclear mass M does not. The binding energy of orbital
electrons to the nucleus is ignored in the definition of the atomic mass.
While for 12C the atomic mass is exactly 12 u, for all other atoms ma does
not exactly match the atomic mass number A. However, for all atomic entities,
A (an integer) and ma are very similar to one another and often the same symbol
(A) is used for the designation of both. The mass in grams equal to the average
atomic mass of a chemical element is referred to as the mole (mol) of the
element and contains exactly 6.022 × 10²³ atoms. This number is referred to as
the Avogadro constant NA of entities per mole. The atomic mass number of all
elements is, thus, defined such that A grams of every element contain exactly
NA atoms. For example, the atomic mass of natural cobalt is 58.9332 u. Thus,
one mole of natural cobalt has a mass of 58.9332 g and by definition contains
6.022 × 10²³ entities (cobalt atoms) per mole of cobalt.
The number of atoms Na per mass m of an element is given as:

Na/m = NA/A    (1.1)

The number of electrons per volume V of an element is, with ρ the mass density:

Z·Na/V = ρ·Z·Na/m = ρ·Z·NA/A    (1.2)

and the number of electrons per mass of an element is:

Z·Na/m = Z·NA/A    (1.3)
It should be noted that Z/A ≈ 0.5 for all elements, with the one notable exception
of hydrogen, for which Z/A = 1. Actually, Z/A slowly decreases from 0.5 for low
Z elements to 0.4 for high Z elements. For example, Z/A for 4He is 0.5, for 60Co is
0.45 and for 235U is 0.39.
If it is assumed that the mass of a molecule is equal to the sum of the masses
of the atoms that make up the molecule, then, for any molecular compound, there
are NA molecules per mole of the compound where the mole in grams is defined
as the sum of the atomic mass numbers of the atoms making up the molecule. For
example, 1 mole of water (H2O) is 18 g of water and 1 mole of carbon dioxide
(CO2) is 44 g of carbon dioxide. Thus, 18 g of water or 44 g of carbon dioxide
contain exactly NA molecules (or 3 NA atoms, since each molecule of water and
carbon dioxide contains three atoms).
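The mole arithmetic above can be checked numerically. A minimal sketch of Eq. (1.1), using the Avogadro constant from Section 1.1.1 and the cobalt and water examples from the text:

```python
# Number of atoms per unit mass via Eq. (1.1): Na/m = NA/A.
N_A = 6.022e23  # Avogadro constant, entities per mole

def atoms_per_gram(atomic_mass_a):
    """Atoms per gram of an element with atomic mass A (in g/mol)."""
    return N_A / atomic_mass_a

# Natural cobalt: A = 58.9332 g/mol, so one mole (58.9332 g) holds NA atoms
# and one gram holds about 1.02e22 atoms.
atoms_in_1_g_co = atoms_per_gram(58.9332)

# One mole of water is 18 g and contains NA molecules, i.e. 3*NA atoms
# (each H2O molecule has three atoms).
molecules_in_18_g_water = 18.0 * atoms_per_gram(18.0)
atoms_in_18_g_water = 3.0 * molecules_in_18_g_water
```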
where Z is the atomic number and A the atomic mass number of a given nucleus.
In nuclear physics, the convention is to designate a nucleus X as ᴬzX, where A is
its atomic mass number and Z its atomic number; for example, the 60Co nucleus
is identified as ⁶⁰₂₇Co and the 226Ra nucleus as ²²⁶₈₈Ra. The atomic number Z is
often omitted in references to an atom because the atom is already identified by
its 1–3 letter symbol. In ion physics, the convention is to designate ions with +
or − superscripts. For example, ⁴₂He⁺ stands for a singly ionized helium atom
and ⁴₂He²⁺ stands for a doubly ionized helium atom, also known as the α particle.
With regard to relative values of atomic number Z and atomic mass number A of
nuclei, the following conventions apply:
An element may be composed of atoms that all have the same number of
protons, i.e. have the same atomic number Z, but have a different number of
neutrons (have different atomic mass numbers A). Such atoms of identical
Z but differing A are called isotopes of a given element.
The term isotope is often misused to designate nuclear species. For
example, 60Co, 137Cs and 226Ra are not isotopes, since they do not belong
to the same element. Rather than isotopes, they should be referred to as
nuclides. On the other hand, it is correct to state that deuterium (with a
nucleus called deuteron) and tritium (with a nucleus called triton) are heavy
isotopes of hydrogen or that 59Co and 60Co are isotopes of cobalt. Thus,
the term radionuclide should be used to designate radioactive species;
however, the term radioisotope is often used for this purpose.
A nuclide is an atomic species characterized by its nuclear composition (A, Z
and the arrangement of nucleons within the nucleus). The term nuclide
refers to all atomic forms of all elements. The term isotope is narrower
and only refers to various atomic forms of a given chemical element.
In addition to being classified into isotopic groups (common atomic
number Z), nuclides are also classified into groups with a common atomic mass
number A (isobars) and a common number of neutrons (isotones). For example,
60Co and 60Ni are isobars with 60 nucleons each (A = 60), and ⁶⁷₃₁Ga, ⁶⁷₃₂Ge and
⁶⁷₃₃As are isobars with atomic mass number 67, while ³H (tritium) and ⁴He are
isotones with two neutrons each (A − Z = 2), and ¹²₆C, ¹³₇N and ¹⁴₈O are isotones
with six neutrons each.
A tool for remembering these definitions is as follows: isotopes have the
same number of protons Z; isotones have the same number of neutrons, A − Z;
isobars have the same mass number A.
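The mnemonic lends itself to a short sketch in which each nuclide is represented by its (Z, A) pair; this representation and the helper function are illustrative, not part of the text:

```python
# Classify a pair of nuclides, each given as a (Z, A) tuple:
# isotopes share Z, isotones share N = A - Z, isobars share A.
def classify(nuclide1, nuclide2):
    z1, a1 = nuclide1
    z2, a2 = nuclide2
    labels = []
    if z1 == z2:
        labels.append("isotopes")
    if a1 - z1 == a2 - z2:
        labels.append("isotones")
    if a1 == a2:
        labels.append("isobars")
    return labels or ["unrelated"]

pair_1 = classify((27, 59), (27, 60))  # 59Co and 60Co: same Z -> isotopes
pair_2 = classify((27, 60), (28, 60))  # 60Co and 60Ni: same A -> isobars
pair_3 = classify((6, 12), (7, 13))    # 12C and 13N: same N = 6 -> isotones
```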
If a nucleus exists in an excited state for some time, it is said to be in
an isomeric (metastable) state. Isomers are, thus, nuclear species that have a
11
CHAPTER 1
common atomic number Z and a common atomic mass number A. For example,
99mTc is an isomeric state of 99Tc and 60mCo is an isomeric state of 60Co.
1.3.1. Nuclear radius
The radius R of a nucleus with atomic mass number A is estimated from the
following expression:
R = R_0 A^(1/3)  (1.4)
where R0 is the nuclear radius constant equal to 1.25 fm. Since the range of A in
nature is from 1 to about 250, nuclear radius ranges from about 1 fm for a proton
to about 8 fm for heavy nuclei.
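As a quick numerical check of Eq. (1.4), a minimal Python sketch (using the value R_0 = 1.25 fm given above; the choice of 238U as the heavy example is illustrative):

```python
def nuclear_radius_fm(A, R0=1.25):
    """Estimate nuclear radius in femtometres from Eq. (1.4): R = R0 * A^(1/3)."""
    return R0 * A ** (1.0 / 3.0)

print(nuclear_radius_fm(1))    # proton (A = 1): 1.25 fm
print(nuclear_radius_fm(238))  # heavy nucleus (e.g. uranium-238): ~7.75 fm
```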
1.3.2. Nuclear binding energy
The sum of the masses of the individual components of a nucleus that
contains Z protons and (A − Z) neutrons is larger than the actual mass of the
nucleus. This difference in mass is called the mass defect (deficit) Δm and its
energy equivalent Δmc^2 is called the total binding energy E_B of the nucleus. The
total binding energy E_B of a nucleus can, thus, be defined as the energy liberated
when Z protons and (A − Z) neutrons are brought together to form the nucleus.
The binding energy per nucleon (EB/A) in a nucleus (i.e. the total binding
energy of a nucleus divided by the number of nucleons in the given nucleus)
varies with the number of nucleons A and is of the order of ~8 MeV/nucleon.
A plot of the binding energy per nucleon E_B/A in megaelectronvolts per
nucleon against the atomic mass number A in the range from 1 to 250 is given in
Fig. 1.1 and shows a rapid rise in E_B/A at small atomic mass numbers, a broad
maximum of about 8.7 MeV/nucleon around A ≈ 60 and a gradual decrease in
E_B/A at large A. The larger the binding energy per nucleon (E_B/A) of an atom,
the larger is the stability of the atom. Thus, the most stable nuclei in nature are
the ones with A ≈ 60 (iron, cobalt, nickel). Nuclei of light elements (small A) are
generally less stable than nuclei with A ≈ 60, and the heaviest nuclei (large A) are
also less stable than nuclei with A ≈ 60.
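The mass defect definition above can be evaluated directly. A minimal Python sketch using the deuteron as an example; the rest energies are rounded reference values assumed here for illustration, not taken from the text:

```python
# Rest energies in MeV (rounded reference values, assumed for illustration)
M_P_C2 = 938.272   # proton
M_N_C2 = 939.565   # neutron

def binding_energy_mev(Z, A, nuclear_rest_energy_mev):
    """Total binding energy E_B = Z*m_p*c^2 + (A - Z)*m_n*c^2 - M*c^2."""
    return Z * M_P_C2 + (A - Z) * M_N_C2 - nuclear_rest_energy_mev

# Deuteron (Z = 1, A = 2), nuclear rest energy ~1875.613 MeV (assumed value)
eb = binding_energy_mev(1, 2, 1875.613)
print(round(eb, 3))      # 2.224 MeV total binding energy
print(round(eb / 2, 3))  # 1.112 MeV per nucleon, well below the ~8 MeV/nucleon plateau
```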
BASIC PHYSICS FOR NUCLEAR MEDICINE
FIG. 1.1. Binding energy per nucleon in megaelectronvolts per nucleon against atomic mass number A. Data are from the National Institute of Science and Technology (NIST).
1.3.3. Nuclear fusion and fission
The peculiar shape of the E_B/A versus A curve (Fig. 1.1) suggests two methods for converting mass into energy: (i) fusion of nuclei at low A and (ii) fission of nuclei at large A:
Fusion of two nuclei of very small mass, e.g. 2H + 3H → 4He + n, will
create a more massive nucleus and release a certain amount of energy.
Experiments using controlled nuclear fusion for production of energy have
so far not been successful in generating a net energy gain, i.e. the amount
of energy consumed is still larger than the amount created. However, fusion
remains an active field of research and it is reasonable to expect that in the
future controlled fusion will play an important role in the production of
electrical power.
Fission attained by bombardment of certain elements of large mass (such as
235U) by thermal neutrons in a nuclear reactor will create two lower mass
and more stable nuclei, and transform some mass into kinetic energy of the
two product nuclei. Hahn, Strassmann, Meitner and Frisch described fission
in 1939, and, in 1942, Fermi and colleagues at the University of Chicago
carried out the first controlled chain reaction based on nuclear fission.
Each two-particle collision possesses a characteristic Q value given as:

Q = (m_1c^2 + m_2c^2) − (m_3c^2 + m_4c^2)  (1.5)

The Q value can be either positive, zero or negative. For Q > 0, the collision is
termed exothermic (also called exoergic) and results in a release of energy; for
Q = 0, the collision is termed elastic; and for Q < 0, the collision is termed
endothermic (also called endoergic), and to take place, it requires an energy
transfer from the projectile to the target. An exothermic reaction can occur
spontaneously, while an endothermic reaction cannot take place unless the
projectile has kinetic energy exceeding the threshold energy (E_K)_thr given as:
(E_K)_thr = [(m_3c^2 + m_4c^2)^2 − (m_1c^2 + m_2c^2)^2] / (2m_2c^2) ≈ −Q(1 + m_1/m_2)  (1.6)
where m_1c^2, m_2c^2, m_3c^2 and m_4c^2 are the rest energies of the projectile m_1, target m_2 and reaction products m_3 and m_4, respectively.
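Equations (1.5) and (1.6) are straightforward to evaluate. A sketch in Python using the deuterium-tritium fusion reaction from Section 1.3.3 as an example; the rest energies are rounded reference values assumed here for illustration:

```python
def q_value_mev(projectile, target, products):
    """Q = (m1c^2 + m2c^2) - (m3c^2 + m4c^2), Eq. (1.5); inputs are rest energies in MeV."""
    return projectile + target - sum(products)

def threshold_energy_mev(q, m1c2, m2c2):
    """Approximate threshold (E_K)thr = -Q(1 + m1/m2) from Eq. (1.6); zero if exothermic."""
    return 0.0 if q > 0 else -q * (1.0 + m1c2 / m2c2)

# Rest energies in MeV for 2H + 3H -> 4He + n (rounded, assumed for illustration)
D, T, HE4, N = 1875.613, 2808.921, 3727.379, 939.565
q = q_value_mev(D, T, [HE4, N])
print(round(q, 2))                    # 17.59 MeV released: the reaction is exothermic
print(threshold_energy_mev(q, D, T))  # 0.0 (no threshold for Q > 0)
```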
FIG. 1.2. Schematic representation of a two-particle collision of a projectile (incident particle) of mass m_1 and velocity υ_1 striking a stationary target with mass m_2 and velocity υ_2 = 0. An intermediate compound entity is formed temporarily that subsequently decays into two reaction products of mass m_3 and m_4.
1.4. RADIOACTIVITY
Radioactivity, also known as radioactive decay, nuclear decay, nuclear
disintegration and nuclear transformation, is a spontaneous process by which
an unstable parent nucleus emits a particle or electromagnetic radiation and
transforms into a more stable daughter nucleus that may or may not be stable.
The unstable daughter nucleus will decay further in a decay series until a stable
nuclear configuration is reached. Radioactive decay is usually accompanied by
emission of energetic particles or γ ray photons or both.
All radioactive decay processes are governed by the same general
formalism that is based on the definition of the activity A(t) and on a characteristic
parameter for each radioactive decay process, the radioactive decay constant λ
with dimensions of reciprocal time, usually in s^−1. The main characteristics of
radioactive decay are as follows:
The radioactive decay constant λ multiplied by a time interval that is much
smaller than 1/λ represents the probability that any particular atom of a
radioactive substance containing a large number N(t) of identical radioactive
atoms will decay (disintegrate) in that time interval. An assumption is made
that λ is independent of the physical environment of a given atom.
The activity A(t) of a radioactive substance containing a large number
N(t) of identical radioactive atoms represents the total number of decays
(disintegrations) per unit time and is defined as a product between N(t) and
λ, i.e.:
A(t) = λN(t)  (1.7)
The SI unit of activity is the becquerel (Bq) given as 1 Bq = 1 s^−1. The
becquerel and hertz both correspond to s^−1, but hertz refers to the frequency of
periodic motion, while becquerel refers to activity.
The old unit of activity, the curie (Ci), was initially defined as the activity
of 1 g of 226Ra; 1 Ci ≅ 3.7 × 10^10 s^−1.
Subsequently, the activity of 1 g of 226Ra was determined to be
3.665 × 10^10 s^−1; however, the definition of the activity unit curie (Ci) was
kept as 1 Ci = 3.7 × 10^10 s^−1. Since the unit of activity, the becquerel, is 1 s^−1,
the SI unit becquerel (Bq) and the old unit curie (Ci) are related as follows:
1 Ci = 3.7 × 10^10 Bq and, consequently, 1 Bq = (3.7 × 10^10)^−1 Ci = 2.703 × 10^−11 Ci.
a = 𝒜/m = λN/m = λN_A/A  (1.8)

P → D  (1.9)

∫_(N_P(0))^(N_P(t)) dN_P/N_P = −λ_P ∫_0^t dt  (1.11)
ln [N_P(t)/N_P(0)] = −λ_P t  (1.12)

or

N_P(t) = N_P(0) e^(−λ_P t)  (1.13)
A_P(t) = A_P(0) e^(−λ_P t)  (1.14)

and

A_P[t = (T_1/2)_P] = (1/2) A_P(0) = A_P(0) e^(−λ_P (T_1/2)_P)  (1.16)
From Eqs (1.15) and (1.16), it is noted that e^(−λ_P (T_1/2)_P) must equal 1/2,
resulting in the following relationship between the decay constant λ_P and half-life
(T_1/2)_P:

λ_P = ln 2/(T_1/2)_P = 0.693/(T_1/2)_P  (1.17)
and

A_P(t = τ_P) = (1/e) A_P(0) = 0.368 A_P(0) = A_P(0) e^(−λ_P τ_P)  (1.19)

1/τ_P = λ_P = ln 2/(T_1/2)_P  (1.20)

and

τ_P = (T_1/2)_P/ln 2 = 1.44 (T_1/2)_P  (1.21)
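The relationships between decay constant, half-life and mean life can be checked numerically. A minimal Python sketch; the 6.02 h half-life used for 99mTc is an assumed illustrative value:

```python
import math

def decay_constant(half_life):
    """lambda_P = ln(2) / (T_1/2)_P, Eq. (1.17)."""
    return math.log(2) / half_life

def mean_life(half_life):
    """tau_P = (T_1/2)_P / ln(2) = 1.44 (T_1/2)_P, Eq. (1.21)."""
    return half_life / math.log(2)

def activity(a0, half_life, t):
    """A_P(t) = A_P(0) exp(-lambda_P * t), Eq. (1.14)."""
    return a0 * math.exp(-decay_constant(half_life) * t)

# Illustrative case: T_1/2 = 6.02 h (roughly 99mTc). After one half-life the
# activity halves; after one mean life it falls to ~36.8% (Eq. (1.19)).
print(round(activity(100.0, 6.02, 6.02), 1))             # 50.0
print(round(activity(100.0, 6.02, mean_life(6.02)), 1))  # 36.8
```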
FIG. 1.3. Activity A_P(t) plotted against time t for a simple decay of a radioactive parent P into a stable or unstable daughter D. The concepts of half-life (T_1/2)_P and mean life τ_P are also illustrated. The area under the exponential decay curve from t = 0 to t = ∞ is equal to the product A_P(0)τ_P, where A_P(0) is the initial activity of the parent P. The slope of the tangent to the decay curve at t = 0 is equal to −λ_P A_P(0) and this tangent crosses the abscissa axis at t = τ_P.
1.4.2. Radioactive series decay

Radioactive parent P decay into a stable daughter D, discussed in Section 1.4.1,
is the simplest known radioactive process; however, the decay of a radioactive
parent P with decay constant λ_P into a radioactive (unstable) daughter D which
in turn decays with decay constant λ_D into a stable or unstable granddaughter G,
i.e. P → D → G, is much more common and results in a radioactive decay series
for which the last decay product is stable.
The parent P in the decay series follows a straightforward radioactive decay
described by Eq. (1.10) for the rate of change of the number of parent nuclei
dN_P(t)/dt. The rate of change of the number of daughter nuclei dN_D(t)/dt,
however, is more complicated and consists of two components, one being the
supply of new daughter nuclei D through the decay of P given as λ_P N_P(t) and
the other being the loss of daughter nuclei D from the decay of D to G given as
−λ_D N_D(t), resulting in the following expression for dN_D(t)/dt:

dN_D(t)/dt = λ_P N_P(t) − λ_D N_D(t) = λ_P N_P(0) e^(−λ_P t) − λ_D N_D(t)  (1.22)

With the initial conditions for time t = 0 assuming that (i) the initial number
of parent nuclei P is N_P(t = 0) = N_P(0), and (ii) there are no daughter D nuclei
present, i.e. N_D(t = 0) = 0, the solution of the differential equation in Eq. (1.22)
reads as follows:

N_D(t) = N_P(0) [λ_P/(λ_D − λ_P)] [e^(−λ_P t) − e^(−λ_D t)]  (1.23)
Recognizing that the activity of the daughter AD(t) is DND(t), the daughter
activity AD(t) is written as:
A_D(t) = λ_D N_D(t) = N_P(0) [λ_D λ_P/(λ_D − λ_P)] [e^(−λ_P t) − e^(−λ_D t)]
       = A_P(0) [λ_D/(λ_D − λ_P)] [e^(−λ_P t) − e^(−λ_D t)]
       = A_P(t) [λ_D/(λ_D − λ_P)] [1 − e^(−(λ_D − λ_P)t)]  (1.24)
where
A_D(t) is the activity at time t of the daughter nuclei equal to λ_D N_D(t);
A_P(0) is the initial activity of the parent nuclei present at time t = 0;
and A_P(t) is the activity at time t of the parent nuclei equal to λ_P N_P(t).
While for initial conditions A_P(t = 0) = A_P(0) and A_D(t = 0) = 0, the parent P
activity A_P(t) follows the exponential decay law of Eq. (1.14) shown in Fig. 1.3,
the daughter D activity A_D(t) starts at 0, then initially rises with time t, reaches
a maximum at a characteristic time t = (t_max)_D, and then diminishes to reach 0 at
t = ∞. The characteristic time (t_max)_D is given as follows:
(t_max)_D = [ln (λ_D/λ_P)]/(λ_D − λ_P)  (1.25)

A_D(t)/A_P(t) = [λ_D/(λ_D − λ_P)] [1 − e^(−(λ_D − λ_P)t)]  (1.28)
(c) The half-life of the daughter is much shorter than that of the parent:
(T_1/2)_D ≪ (T_1/2)_P or λ_D ≫ λ_P.
For relatively large time t ≫ (t_max)_D, the activity ratio A_D(t)/A_P(t) of Eq. (1.28)
simplifies to:

A_D(t)/A_P(t) ≈ 1  (1.29)

The activity of the daughter A_D(t) very closely approximates that of its parent
A_P(t), i.e. A_D(t) ≈ A_P(t), and they decay together at the rate of the parent. This
special case of transient equilibrium in which the daughter and parent activities
are essentially identical is called secular equilibrium.
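Equations (1.24), (1.25) and (1.28) can be illustrated numerically. A sketch in Python using half-lives of roughly 66 h for the parent and 6.02 h for the daughter, values assumed here to mimic a 99Mo/99mTc generator (the branching fraction of the parent decay is ignored for simplicity of the sketch):

```python
import math

def daughter_activity(ap0, lam_p, lam_d, t):
    """A_D(t) from Eq. (1.24): A_P(0) * lam_d/(lam_d - lam_p) * (exp(-lam_p t) - exp(-lam_d t))."""
    return ap0 * lam_d / (lam_d - lam_p) * (math.exp(-lam_p * t) - math.exp(-lam_d * t))

def t_max(lam_p, lam_d):
    """Time of maximum daughter activity, Eq. (1.25)."""
    return math.log(lam_d / lam_p) / (lam_d - lam_p)

# Assumed illustrative half-lives (hours): parent 66, daughter 6.02
lam_p = math.log(2) / 66.0
lam_d = math.log(2) / 6.02
print(round(t_max(lam_p, lam_d), 1))  # 22.9 h to peak daughter activity

# At large t, A_D/A_P approaches lam_d/(lam_d - lam_p): transient equilibrium
t = 200.0
ratio = daughter_activity(1.0, lam_p, lam_d, t) / math.exp(-lam_p * t)
print(round(ratio, 2))                # 1.1
```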
1.4.4. Production of radionuclides (nuclear activation)
In 1896, Henri Becquerel discovered natural radioactivity, and in 1934
Frédéric Joliot and Irène Joliot-Curie discovered artificial radioactivity. Most
natural radionuclides are produced through one of four radioactive decay chains,
each chain fed by a long lived and heavy parent radionuclide. The vast majority
of currently known radionuclides, however, are human-made and artificially
produced through a process of nuclear activation which uses bombardment of a
stable nuclide with a suitable energetic particle or high energy photons to induce
a nuclear transformation. Various particles or electromagnetic radiation generated
by a variety of machines are used for this purpose, most notably neutrons from
nuclear reactors for neutron activation, protons from cyclotrons or synchrotrons
for proton activation, and X rays from high energy linear accelerators for nuclear
photoactivation.
Neutron activation is important in production of radionuclides used
for external beam radiotherapy, brachytherapy, therapeutic nuclear medicine
and nuclear medicine imaging also referred to as molecular imaging; proton
activation is important in production of positron emitters used in positron
emission tomography (PET) imaging; and nuclear photoactivation is important
from a radiation protection point of view when components of high energy
radiotherapy machines become activated during patient treatment and pose a
potential radiation risk to staff using the equipment.
A more in depth discussion of radionuclide production can be found in
Chapter 4.
1.4.5. Modes of radioactive decay
Nucleons are bound together to form the nucleus by the strong nuclear
force that, in comparison to the protonproton Coulomb repulsive force, is at
least two orders of magnitude larger but of extremely short range (only a few
femtometres). To bind the nucleons into a stable nucleus, a delicate equilibrium
between the number of protons and the number of neutrons must exist. For light
(low A) nuclear species, a stable nucleus is formed by an equal number of protons
and neutrons (Z = N). Above the nucleon number A ≈ 40, more neutrons than
protons must constitute the nucleus to form a stable configuration in order to
overcome the Coulomb repulsion among the charged protons.
If the optimal equilibrium between protons and neutrons does not exist, the
nucleus is unstable (radioactive) and decays with a specific decay constant into
a more stable configuration that may also be unstable and decay further, forming
a decay chain that eventually ends with a stable nuclide.
Radioactive nuclides, either naturally occurring or artificially produced
by nuclear activation or nuclear reactions, are unstable and strive to reach
more stable nuclear configurations through various processes of spontaneous
radioactive decay that involve transformation to a more stable nuclide and
emission of energetic particles. General aspects of spontaneous radioactive decay
may be discussed using the formalism based on the definitions of activity A and
decay constant without regard for the actual microscopic processes that underlie
the radioactive disintegrations.
A closer look at radioactive decay processes shows that they are divided
into six categories, consisting of three main categories of importance to
medical use of radionuclides and three categories of less importance. The main
categories are: (i) alpha (α) decay, (ii) beta (β) decay encompassing three related
decay processes (beta minus, beta plus and electron capture) and (iii) gamma
(γ) decay encompassing two competing decay processes (pure γ decay and
internal conversion). The three less important radioactive decay categories are:
(i) spontaneous fission, (ii) proton emission decay and (iii) neutron emission
decay.
Nuclides with an excess number of neutrons are referred to as neutron-rich;
nuclides with an excess number of protons are referred to as proton-rich. The
following features are notable:
For a slight proton-neutron imbalance in the nucleus, radionuclides decay
by β decay characterized by transformation of a proton into a neutron in
β+ decay, and transformation of a neutron into a proton in β− decay.
For a large proton-neutron imbalance in the nucleus, the radionuclides
decay by emission of nucleons: α particles in α decay, protons in proton
emission decay and neutrons in neutron emission decay.
For very large atomic mass number nuclides (A > 230), spontaneous fission,
which competes with α decay, is also possible.
Excited nuclei decay to their ground state through γ decay. Most of these
transformations occur immediately upon production of the excited state by either
α or β decay; however, a few exhibit delayed decays that are governed by their
own decay constants and are referred to as metastable states (e.g. 99mTc).
Nuclear transformations are usually accompanied by emission of energetic
particles (charged particles, neutral particles, photons, etc.). The particles released
in the various decay modes are as follows:
Alpha particles in α decay;
Electrons in β− decay;
Positrons in β+ decay;
Neutrinos in β+ decay;
Antineutrinos in β− decay;
Gamma rays in γ decay;
Atomic orbital electrons in internal conversion;
Neutrons in spontaneous fission and in neutron emission decay;
Heavier nuclei in spontaneous fission;
Protons in proton emission decay.
where M(P), M(D) and m are the nuclear rest masses (in unified atomic mass
units u) of the parent, daughter and emitted particles, respectively.
For radioactive decay to be energetically possible, the Q value must be
greater than zero. This means that spontaneous radioactive decay processes
release energy and are called exoergic or exothermic. For Q > 0, the energy
equivalent of the Q value is shared as kinetic energy between the particles emitted
in the decay process and the daughter product. Since the daughter generally has a
much larger mass than the other emitted particles, the kinetic energy acquired by
the daughter is usually negligibly small.
1.4.6. Alpha decay
In α decay, a radioactive parent nucleus P decays into a more stable
daughter nucleus D by ejecting an energetic α particle. Since the α particle is a
4He nucleus (^4_2 He^2+), in α decay the parent's atomic number Z decreases by two
and its atomic mass number A decreases by four:

^A_Z P → ^{A−4}_{Z−2} D + ^4_2 He^{2+} = ^{A−4}_{Z−2} D + α  (1.31)
and

^{222}_{86} Rn → ^{218}_{84} Po + α,  T_1/2 = 3.82 d  (1.32)
^A_Z P → ^A_{Z+1} D + e^− + ν̄_e  (1.33)

n → p + e^− + ν̄_e  (1.34)

p → n + e^+ + ν_e  (1.35)
PET. The most common tracer for PET studies is fluorodeoxyglucose (FDG)
labelled with 18F which serves as a good example of + decay:
^{18}_9 F → ^{18}_8 O + e^+ + ν_e,  T_1/2 = 110 min  (1.36)
^A_Z P + e^− → ^A_{Z−1} D + ν_e  (1.37)

^{125}_{53} I + e^− → ^{125}_{52} Te* + ν_e,  T_1/2 = 60 d  (1.38)
^A_Z X* → ^A_Z X + γ  (1.39)

and

^A_Z X* → ^A_Z X^+ + e^−  (1.40)

where ^A_Z X* stands for an excited state of the nucleus ^A_Z X, and ^A_Z X^+ stands for the singly ionized atom.
An example of γ decay is the transition of an excited ^{60}_{28} Ni* nucleus,
resulting from the β− decay of ^{60}_{27} Co, into stable ^{60}_{28} Ni through an emission of two
γ rays with energies of 1.17 and 1.33 MeV. An example of internal conversion
decay is the decay of excited ^{125}_{52} Te* which results from an electron capture decay
of 125I into stable ^{125}_{52} Te through emission of 35 keV γ rays (7%) and internal
conversion electrons (93%).
and

dI/I = −μ dx  (1.42)
where the negative sign is used to indicate a decrease in signal I(x) with an
increase in absorber thickness x.
It should be noted that Eqs(1.41) and (1.42) can be considered identical.
The form of Eq. (1.41) is identical to the form of Eq. (1.10) that deals with
simple radioactive decay; however, it must be noted that in radioactive decay the
product λN(t) is defined as activity A(t), while in photon beam attenuation the
product μI(x) does not have a special name and symbol.
Integration of Eq. (1.42) over absorber thickness x from 0 to x and over
intensity I(x) from the initial intensity I(0) (no absorber) to intensity I(x) at
absorber thickness x, gives:

∫_(I(0))^(I(x)) dI/I = −μ ∫_0^x dx  (1.43)

resulting in:

I(x) = I(0) e^(−μx)  (1.44)
resulting in:

1/2 = e^(−μ x_1/2)

or

μ x_1/2 = ln 2 = 0.693

and

HVL = x_1/2 = (ln 2)/μ  (1.46)
resulting in:

1/e = e^(−μ x̄)

or

μ x̄ = 1

and

MFP = x̄ = 1/μ  (1.48)
resulting in:

1/10 = e^(−μ x_1/10)

or

μ x_1/10 = ln 10 = 2.303

and

TVL = x_1/10 = (ln 10)/μ  (1.50)
From Eqs (1.46), (1.48) and (1.50), the linear attenuation coefficient μ may
be expressed in terms of x_1/2, x̄ and x_1/10, respectively, as follows:

μ = (ln 2)/x_1/2 = 1/x̄ = (ln 10)/x_1/10  (1.51)

and

x_1/2 = (ln 2) x̄ = (ln 2/ln 10) x_1/10 = 0.301 x_1/10  (1.52)
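The characteristic thicknesses of Eqs (1.46), (1.48) and (1.50), and the relation in Eq. (1.52), can be verified with a short Python sketch; the linear attenuation coefficient μ = 0.2 cm^−1 is an assumed illustrative value, not from the text:

```python
import math

def hvl(mu):
    """Half-value layer x_1/2 = ln(2)/mu, Eq. (1.46)."""
    return math.log(2) / mu

def mfp(mu):
    """Mean free path x_bar = 1/mu, Eq. (1.48)."""
    return 1.0 / mu

def tvl(mu):
    """Tenth-value layer x_1/10 = ln(10)/mu, Eq. (1.50)."""
    return math.log(10) / mu

def transmitted(i0, mu, x):
    """I(x) = I(0) exp(-mu x), Eq. (1.44)."""
    return i0 * math.exp(-mu * x)

mu = 0.2  # cm^-1, assumed illustrative value
print(round(hvl(mu), 3), round(mfp(mu), 1), round(tvl(mu), 3))  # 3.466 5.0 11.513
print(round(transmitted(100.0, mu, hvl(mu)), 1))                # 50.0: one HVL halves the beam
print(round(hvl(mu) / tvl(mu), 3))                              # 0.301, as in Eq. (1.52)
```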
FIG. 1.4. Intensity I(x) against absorber thickness x for a monoenergetic photon beam. Half-value layer x_1/2, mean free path x̄ and tenth-value layer x_1/10 are also illustrated. The area under the exponential attenuation curve from x = 0 to x = ∞ is equal to the product I(0)x̄, where I(0) is the initial intensity of the monoenergetic photon beam. The slope of the tangent to the attenuation curve at x = 0 is equal to −μI(0) and this tangent crosses the abscissa (x) axis at x = x̄.
μ_tr = μ (Ē_tr/(hν))  (1.54)

and

μ_ab = μ (Ē_ab/(hν))  (1.55)
FIG. 1.5. Mass attenuation coefficient μ/ρ against photon energy hν in the range from 1 keV to 1000 MeV for carbon (a) and lead (b). In addition to the total coefficients μ/ρ, the individual coefficients for the photoelectric effect, Rayleigh scattering, Compton scattering and pair production (including triplet production) are also shown. Data are from the National Institute of Science and Technology (NIST).
A tightly bound electron is an electron whose binding energy E_B is
comparable to, larger than, or slightly smaller than the photon energy hν. For a
photon interaction to occur with a tightly bound electron, the binding energy
E_B of the electron must be of the order of, but slightly smaller than, the photon
energy, i.e. E_B ≤ hν. An interaction between a photon and a tightly bound electron
is considered an interaction between a photon and the atom as a whole.
As far as the photon fate after the interaction with an atom is concerned,
there are two possible outcomes: (i) the photon disappears and is absorbed
completely (photoelectric effect, nuclear pair production, triplet production,
photonuclear reaction) and (ii) the photon is scattered and changes its direction
but keeps its energy (Rayleigh scattering) or loses part of its energy (Compton
effect).
The most important photon interactions with atoms of the absorber are:
the Compton effect, photoelectric effect, nuclear pair production, electronic pair
production (triplet production) and photonuclear reactions. In some of these
interactions, energetic electrons are released from absorber atoms (photoelectric
effect, Compton effect, triplet production) and electronic vacancies are left in
absorber atoms; in other interactions, a portion of the incident photon energy
is used to produce free electrons and positrons. All of these light charged
particles move through the absorber and either deposit their kinetic energy in the
absorber (dose) or transform part of it back into radiation through production of
bremsstrahlung radiation.
The fate of electronic vacancies produced in photon interactions with
absorber atoms is the same as the fate of vacancies produced in electron capture
and internal conversion. As alluded to in Section 1.4.11, an electron from a higher
atomic shell of the absorber atom fills the electronic vacancy in a lower shell and
the transition energy is emitted either in the form of a characteristic X ray (also
called a fluorescence photon) or an Auger electron and this process continues
until the vacancy migrates to the outer shell of the absorber atom. A free electron
from the environment will eventually fill the outer shell vacancy and the absorber
ion will revert to a neutral atom in the ground state.
A vacancy produced in an inner shell of an absorber atom migrates to
the outer shell and the migration is accompanied by emission of a series of
characteristic photons and/or Auger electrons. The phenomenon of emission
of Auger electrons from an excited atom is called the Auger effect. Since each
Auger transition converts an initial single electron vacancy into two vacancies,
a cascade of low energy Auger electrons is emitted from the atom. These low
energy electrons have a very short range in tissue but may produce ionization
densities comparable to those produced in an α particle track.
The branching between a characteristic photon and an Auger electron is
governed by the fluorescence yield ω which, as shown in Fig. 1.6, for a given
electronic shell, gives the number of fluorescence photons emitted per vacancy
in the shell. The fluorescence yield ω can also be defined as the probability of
emission of a fluorescence photon for a given shell vacancy. Consequently, as
also shown in Fig. 1.6, (1 − ω) gives the probability of emission of an Auger
electron for a given shell vacancy.
FIG. 1.6. Fluorescence yields ω_K, ω_L and ω_M against atomic number Z of the absorber. Also shown are probabilities for the Auger effect, given as (1 − ω). Data are from the National Institute of Science and Technology (NIST).
1.6.5. Photoelectric effect

In the photoelectric effect (sometimes called the photoeffect), the photon
interacts with a tightly bound orbital electron of an absorber atom, the photon
disappears and the orbital electron is ejected from the atom as a so-called
photoelectron, with a kinetic energy EK given as:
E_K = hν − E_B  (1.56)

where
hν is the incident photon energy;
and E_B is the binding energy of the ejected photoelectron.
A general diagram of the photoelectric effect is provided (see Fig. 1.9(a)).
For the photoelectric effect to happen, the photon energy hν must exceed
the binding energy E_B of the orbital electron to be ejected and, moreover, the
closer hν is to E_B, the higher the probability of the photoelectric effect happening.
The photoelectric mass attenuation coefficient τ/ρ is plotted in Fig. 1.5 for carbon
and lead as one of the grey curves representing the components of the total μ/ρ
attenuation coefficient. The sharp discontinuities in the energy hν are called
absorption edges and occur when hν becomes equal to the binding energy E_B of a
given atomic shell. For example, the K absorption edge occurs at hν = 88 keV in
lead, since the K shell binding energy E_B in lead is 88 keV. Absorption edges for
carbon occur at hν < 1 keV and, thus, do not appear in Fig. 1.5(a).
As far as the photoelectric attenuation coefficient dependence on photon
energy hν and absorber atomic number Z is concerned, the photoelectric atomic
attenuation coefficient aτ goes approximately as Z^5/(hν)^3, while the photoelectric
mass attenuation coefficient τ/ρ goes approximately as Z^4/(hν)^3.
As evident from Fig. 1.5, the photoelectric attenuation coefficient τ/ρ
is the major contributor to the total attenuation coefficient μ/ρ at relatively low
photon energies where hν is of the order of the K shell binding energy and less
than 0.1 MeV. At higher photon energies, first the Compton effect and then pair
production become the major contributors to the photon attenuation in the absorber.
1.6.6. Rayleigh (coherent) scattering
In Rayleigh scattering (also called coherent scattering), the photon
interacts with the full complement of tightly bound atomic orbital electrons of
an absorber atom. The event is considered elastic in the sense that the photon
loses essentially none of its energy hν but is scattered through a relatively
small scattering angle θ. A general diagram of Rayleigh scattering is given
(see Fig. 1.9(b)).
Since no energy transfer occurs from photons to charged particles, Rayleigh
scattering plays no role in the energy transfer attenuation coefficient and energy
absorption coefficient; however, it contributes to the total attenuation coefficient
μ/ρ through the elastic scattering process. The Rayleigh atomic attenuation
coefficient aσ_R is proportional to Z^2/(hν)^2 and the Rayleigh mass attenuation
coefficient σ_R/ρ is proportional to Z/(hν)^2.
As a result of no energy transfer from photons to charged particles in the
absorber, Rayleigh scattering is of no importance in radiation dosimetry. As far
as photon attenuation is concerned, however, the relative importance of Rayleigh
scattering in comparison to other photon interactions in tissue and tissue
equivalent materials amounts to only a few per cent of the total μ/ρ but it should
not be neglected.
1.6.7. Compton effect (incoherent scattering)
The Compton effect (also called incoherent scattering or Compton
scattering) is described as an interaction between a photon and a free as well
as stationary electron. Of course, the interacting electron is not free, rather it is
bound to a nucleus of an absorbing atom, but the photon energy hν is much larger
than the binding energy EB of the electron (EB ≪ hν), so that the electron is said
to be loosely bound or essentially free and stationary.
In the Compton effect, the photon loses part of its energy to the recoil
(Compton) electron and is scattered as a photon hν′ through a scattering angle θ,
as shown schematically in Fig. 1.7. In the diagram, the interacting electron is at
the origin of the Cartesian coordinate system and the incident photon is oriented
in the positive direction along the abscissa (x) axis. The scattering angle θ is the
angle between the direction of the scattered photon hν′ and the positive abscissa
axis, while the recoil angle φ is the angle between the direction of the recoil
electron and the positive abscissa axis. A general diagram of the Compton effect
is given (see Fig. 1.9(c)).
FIG. 1.7. Schematic diagram of the Compton effect in which an incident photon of energy
hν = 1 MeV interacts with a free and stationary electron. A photon with energy hν′ = 0.505 MeV
is produced and scattered with a scattering angle θ = 60°.
The conservation of total energy in the Compton effect is expressed as:

hν + mec² = hν′ + mec² + EK

or

hν = hν′ + EK   (1.57)

The conservation of momentum along the x axis (direction of the incident photon) and along the y axis is expressed, respectively, as:

hν/c = (hν′/c) cos θ + [meυ/√(1 − υ²/c²)] cos φ   (1.58)

and

0 = (hν′/c) sin θ − [meυ/√(1 − υ²/c²)] sin φ   (1.59)

where

mec² is the rest energy of the electron (0.511 MeV);
EK is the kinetic energy of the recoil (Compton) electron;
υ is the velocity of the recoil (Compton) electron;
and c is the speed of light in a vacuum (3 × 10⁸ m/s).
From the equations describing conservation of energy (Eq. (1.57)) and
conservation of momentum (Eqs (1.58) and (1.59)), the basic Compton equation
(also referred to as the Compton wavelength-shift equation) can be derived and is
expressed as follows:

Δλ = λ′ − λ = [h/(mec)](1 − cos θ) = λC(1 − cos θ)   (1.60)

where

λ is the wavelength of the incident photon (λ = c/ν);
λ′ is the wavelength of the scattered photon (λ′ = c/ν′);
Δλ is the wavelength shift in the Compton effect (Δλ = λ′ − λ);
and λC, defined as λC = h/(mec) = 2πℏc/(mec²) = 0.024 Å, is the so-called Compton
wavelength of the electron.
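The wavelength shift of Eq. (1.60) depends only on the scattering angle and fundamental constants, so it is easy to evaluate directly. The following short sketch (added for illustration, with rounded constant values) computes Δλ in angstroms:

```python
import math

H = 6.626e-34   # Planck constant (J*s)
ME = 9.109e-31  # electron rest mass (kg)
C = 2.998e8     # speed of light in a vacuum (m/s)

def compton_shift_angstrom(theta_deg):
    """Wavelength shift of eq. (1.60), in angstroms, for angle theta."""
    lam_c = H / (ME * C) * 1e10  # Compton wavelength in angstrom (~0.024)
    return lam_c * (1 - math.cos(math.radians(theta_deg)))

print(f"{compton_shift_angstrom(90):.4f} A")   # ~0.0243 A
print(f"{compton_shift_angstrom(180):.4f} A")  # ~0.0485 A
```

Note that the shift is independent of the incident photon energy; only the fractional energy loss depends on hν, through Eq. (1.61) below.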
From the Compton equation (Eq. (1.60)), it is easy to show that the
scattered photon energy hν′ and the recoil electron kinetic energy EK depend
on the incident photon energy hν as well as on the scattering angle θ and are,
respectively, given as:

hν′(hν, θ) = hν/[1 + ε(1 − cos θ)]   (1.61)

and

EK^C(hν, θ) = hν − hν′ = hν ε(1 − cos θ)/[1 + ε(1 − cos θ)]   (1.62)

where ε is the incident photon energy hν normalized to the electron rest energy mec²,
i.e. ε = hν/(mec²).
Using Eq. (1.61), it is easy to show that the energies of forward-scattered
photons (θ = 0), side-scattered photons (θ = π/2) and backscattered photons
(θ = π) are in general given as follows:

hν′(θ = 0) = hν   (1.63)

hν′(θ = π/2) = hν/(1 + ε)   (1.64)

and

hν′(θ = π) = hν/(1 + 2ε)   (1.65)
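Equation (1.61) can be checked numerically. The sketch below (an added illustration) reproduces the special cases of Eqs (1.63)–(1.65) and the hν′ = 0.505 MeV value quoted for Fig. 1.7:

```python
import math

ME_C2 = 0.511  # electron rest energy (MeV)

def scattered_energy(hnu, theta):
    """Scattered photon energy h*nu' from eq. (1.61); hnu in MeV, theta in rad."""
    eps = hnu / ME_C2
    return hnu / (1 + eps * (1 - math.cos(theta)))

hnu = 1.0  # MeV, the incident energy used in Fig. 1.7
print(scattered_energy(hnu, 0))                 # forward:  1.0 MeV (eq. 1.63)
print(scattered_energy(hnu, math.pi / 2))       # side:    ~0.338 MeV (eq. 1.64)
print(scattered_energy(hnu, math.pi))           # back:    ~0.204 MeV (eq. 1.65)
print(scattered_energy(hnu, math.radians(60)))  # ~0.505 MeV, matching Fig. 1.7
```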
For very large incident photon energies (hν → ∞), they are given as:

hν′(θ = 0) = hν   (1.66)

hν′(θ = π/2) = mec²   (1.67)

and

hν′(θ = π) = mec²/2   (1.68)

The scattering angle θ and the recoil electron angle φ are related as follows:

cot φ = (1 + ε) tan(θ/2)   (1.69)

and

tan φ = cot(θ/2)/(1 + ε)   (1.70)

Since the range of θ is from 0 (forward scattering) through π/2 (side scattering)
to π (backscattering), it is noted that the corresponding range of φ is
from φ = π/2 at θ = 0, through tan φ = (1 + ε)⁻¹ at θ = π/2, to φ = 0 at θ = π.
The Compton electronic attenuation coefficient eσC steadily decreases
with increasing hν from a theoretical value of 0.665 × 10⁻²⁴ cm²/electron (known
as the Thomson cross-section) at low photon energies to 0.21 × 10⁻²⁴ cm²/electron
at hν = 1 MeV, 0.051 × 10⁻²⁴ cm²/electron at hν = 10 MeV, and
0.008 × 10⁻²⁴ cm²/electron at hν = 100 MeV.
Since the Compton interaction is a photon interaction with a free electron, the
Compton atomic attenuation coefficient aσC depends linearly on the absorber
atomic number Z, while the electronic coefficient eσC and the mass coefficient
σC/ρ are essentially independent of Z. This independence of Z can be observed in
Fig. 1.5, which shows that σC/ρ for carbon (Z = 6) and lead (Z = 82) at intermediate
photon energies (~1 MeV), where the Compton effect predominates, is equal to
about 0.1 cm²/g irrespective of Z.
Equation (1.62) gives the energy transferred from the incident photon
to the recoil electron in the Compton effect as a function of the scattering
angle θ. The maximum energy transfer to the recoil electron occurs when the photon
is backscattered (θ = π) and the Compton maximum energy transfer fraction
(fC)max is then given as:

(fC)max = (EK^C)max/(hν) = 2ε/(1 + 2ε)   (1.71)
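Equation (1.71) is easily evaluated. The added sketch below tabulates (fC)max for a few photon energies, showing how the maximum transferable fraction grows with hν:

```python
ME_C2 = 0.511  # electron rest energy (MeV)

def fc_max(hnu):
    """Maximum Compton energy transfer fraction, eq. (1.71); hnu in MeV."""
    eps = hnu / ME_C2
    return 2 * eps / (1 + 2 * eps)

for hnu in (0.01, 0.1, 1.0, 10.0):
    print(f"hnu = {hnu:5.2f} MeV: (f_C)_max = {fc_max(hnu):.3f}")
# The fraction rises from ~0.04 at 0.01 MeV towards 1 at very high energies.
```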
The maximum and mean fractions of the incident photon energy hν transferred
to the recoil electron are plotted against photon energy hν in the Compton graph
presented in Fig. 1.8. The figure shows that the mean fractional energy transfer
to recoil electrons is quite low at low photon energies (f̄C = 0.02 at hν = 0.01 MeV),
then slowly rises through f̄C = 0.44 at hν = 1 MeV to reach f̄C = 0.80 at hν = 100 MeV,
and approaches one asymptotically at very high incident photon energies.
FIG. 1.8. Maximum and mean fractions of incident photon energy hν transferred to the recoil
electron in the Compton effect.
1.6.8. Pair production
When the incident photon energy hν exceeds 2mec² = 1.022 MeV, with mec² being the
rest energy of the electron and positron, the production of an electron-positron
pair in conjunction with a complete absorption of the incident
photon by the absorber atom becomes energetically possible. For the effect to
occur, three quantities must be conserved: energy, charge and momentum. To
conserve the linear momentum simultaneously with total energy and charge,
the effect cannot occur in free space; it can only occur in the Coulomb electric
field of a collision partner (atomic nucleus or orbital electron) that can take up
a suitable fraction of the momentum carried by the photon. Two types of pair
production are known:
If the collision partner is an atomic nucleus of the absorber, the pair
production event is called nuclear pair production and is characterized by
a photon energy threshold slightly larger than two electron rest masses
(2mec2=1.022 MeV).
Less probable, but nonetheless possible, is pair production in the Coulomb
field of an orbital electron of an absorber atom. The event is called electronic
pair production or triplet production and its threshold photon
energy is 4mec² = 2.044 MeV.
FIG. 1.9. Schematic diagrams of the most important modes of photon interaction with atoms of
an absorber: (a) photoelectric effect; (b) Rayleigh scattering; (c) Compton effect; (d) nuclear
pair production; and (e) electronic pair production (triplet production).
The two pair production attenuation coefficients, despite having different origins, are
usually dealt with together as one parameter referred to as pair production. The component
that nuclear pair production contributes usually exceeds 90%. The nuclear pair production
atomic attenuation coefficient aκ and the pair production mass attenuation coefficient κ/ρ vary
approximately as Z² and Z, respectively, where Z is the atomic number of the absorber.
1.6.9. Relative predominance of individual effects
As is evident from the discussion above, photons have several options for
interaction with absorber atoms. Five of the most important photon interactions are
shown schematically in Fig. 1.9. Nuclear and electronic pair production are
usually combined and treated under the header pair production.
The probability for a photon to undergo any one of the various interaction
phenomena with an absorber depends on the energy hν of the photon and the
atomic number Z of the absorber. In general, the photoelectric effect predominates
at low photon energies, the Compton effect at intermediate energies and pair
production at high photon energies.
Figure 1.10 shows the regions of relative predominance of the three most
important individual effects with hν and Z as parameters. The two curves display
the points on the (hν, Z) diagram for which aτ = aσC at low photon energies and for
which aσC = aκ for high photon energies and, thus, delineate regions of photoelectric
effect predominance at low photon energies, Compton effect predominance at
intermediate photon energies and pair production predominance at high photon
energies. Figure 1.10 also indicates how the regions of predominance are affected
by the absorber atomic number. For example, a 100 keV photon will interact with
a lead absorber (Z = 82) predominantly through the photoelectric effect and with
soft tissue (Zeff ≈ 7.5) predominantly through the Compton effect. A 10 MeV
photon, on the other hand, will interact with lead predominantly through pair
production and with tissue predominantly through the Compton effect.
1.6.10. Macroscopic attenuation coefficients
For a given photon energy hν and absorber atomic number Z, the
macroscopic attenuation coefficient μ and energy transfer coefficient μtr are
given as a sum of coefficients for the individual photon interactions discussed above
(photoelectric, Rayleigh, Compton and pair production):

μ = (ρNA/A)(aτ + aσR + aσC + aκ)   (1.72)

and

μtr = (ρNA/A)[aτtr + (aσC)tr + aκtr] = (ρNA/A)(aτ fPE + aσC fC + aκ fPP)   (1.73)

where all parameters are defined in the sections dealing with the individual
microscopic effects.
It should be noted that in Rayleigh scattering there is no energy transfer to
charged particles.
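The bookkeeping of Eq. (1.72) can be sketched in a few lines of code. In the added illustration below, the cross-section values are placeholders chosen only to show the structure of the calculation (roughly representing a light element near 1 MeV, where the Compton term dominates), not tabulated data:

```python
# Sketch of eq. (1.72): linear attenuation coefficient as the number of atoms
# per unit volume (rho * N_A / A) times the sum of atomic cross-sections.

N_A = 6.022e23  # Avogadro constant (atoms/mol)

def mu_linear(rho, A, sigma_pe, sigma_rayleigh, sigma_compton, sigma_pair):
    """mu = (rho*N_A/A)*(a_tau + a_sigma_R + a_sigma_C + a_kappa);
    cross-sections in cm^2/atom, rho in g/cm^3, A in g/mol; mu in cm^-1."""
    return rho * N_A / A * (sigma_pe + sigma_rayleigh + sigma_compton + sigma_pair)

# Hypothetical cross-sections for a carbon-like absorber near 1 MeV,
# where the Compton contribution dominates (illustrative values only):
mu = mu_linear(rho=1.0, A=12.0, sigma_pe=1e-27, sigma_rayleigh=1e-26,
               sigma_compton=1.2e-24, sigma_pair=0.0)
print(f"mu = {mu:.3f} cm^-1")
```

For real work, tabulated cross-section data (e.g. from standard photon attenuation databases) would replace the placeholder values.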
The energy absorption coefficient μab (often designated μen in the literature)
is derived from μtr of Eq. (1.73) as follows:

μab = μen = μtr(1 − ḡ)   (1.74)

where ḡ is the mean radiation fraction accounting for the fraction of the
mean energy transferred from photons to charged particles and subsequently
lost by charged particles through radiation losses. These losses consist of two
components: the predominant bremsstrahlung loss and the small, yet not always
negligible, in-flight annihilation loss.
1.6.11. Effects following photon interactions with absorber
and summary of photon interactions
In the photoelectric effect, Compton effect and triplet production, vacancies
are produced in atomic shells of absorber atoms through the ejection of orbital
electrons from atomic shells. For the diagnostic range and megavoltage range
of photons used for diagnosis and treatment of disease with radiation, the shell
vacancies occur mainly in inner atomic shells and are followed by characteristic
radiation or Auger electrons, the probability of the former given by fluorescence
yield (seeFig.1.6).
Pair production and triplet production are followed by the annihilation
of the positron with a free electron, producing two annihilation quanta, most
commonly with an energy of 0.511 MeV each and emitted at 180° from each
other to satisfy the conservation of energy, momentum and charge.
BIBLIOGRAPHY
ATTIX, F.H., Introduction to Radiological Physics and Radiation Dosimetry, Wiley, New York
(1986).
CHERRY, S.R., SORENSON, J.A., PHELPS, M.E., Physics in Nuclear Medicine, 3rd edn,
Saunders, Philadelphia, PA (2003).
EVANS, R.D., The Atomic Nucleus, Krieger Publishing, Malabar, FL (1955).
HENDEE, W., RITENOUR, E.R., Medical Imaging Physics, 4th edn, Wiley, New York (2002).
JOHNS, H.E., CUNNINGHAM, J.R., The Physics of Radiology, 3rd edn, Thomas, Springfield,
IL (1984).
KHAN, F., The Physics of Radiation Therapy, 4th edn, Lippincott, Williams and Wilkins,
Baltimore, MD (2009).
KRANE, K., Modern Physics, 3rd edn, Wiley, New York (2012).
PODGORSAK, E.B., Radiation Physics for Medical Physicists, 2nd edn, Springer, Heidelberg,
New York (2010).
ROHLF, J.W., Modern Physics from α to Z⁰, Wiley, New York (1994).
CHAPTER 2
BASIC RADIOBIOLOGY
R.G. DALE
Department of Surgery and Cancer,
Faculty of Medicine,
Imperial College London,
London, United Kingdom
J. WONDERGEM*
Division of Human Health,
International Atomic Energy Agency,
Vienna
2.1. INTRODUCTION
Radiobiology is the study (both qualitative and quantitative) of the actions of
ionizing radiations on living matter. Since radiation has the ability to cause changes
in cells which may later cause them to become malignant, or bring about other
detrimental functional changes in irradiated tissues and organs, consideration of
the associated radiobiology is important in all diagnostic applications of radiation.
Additionally, since radiation can lead directly to cell death, consideration of the
radiobiological aspects of cell killing is essential in all types of radiation therapy.
2.2. RADIATION EFFECTS AND TIMESCALES
At the microscopic level, incident rays or particles may interact with
orbital electrons within the cellular atoms and molecules to cause excitation or
ionization. Excitation involves raising a bound electron to a higher energy state,
but without the electron having sufficient energy to leave the host atom. With
ionization, the electron receives sufficient energy to be ejected from its orbit and
to leave the host atom. Ionizing radiations (of which there are several types) are,
thus, defined through their ability to induce this electron ejection process, and
the irradiation of cellular material with such radiation gives rise to the production
of a flux of energetic secondary particles (electrons). These secondary particles,
energetic and unbound, are capable of migrating away from the site of their
production and, through a series of interactions with other atoms and molecules,
give up their energy to the surrounding medium as they do so.
This energy absorption process gives rise to radicals and other chemical
species and it is the ensuing chemical interactions involving these which are the
true causatives of radiation damage. Although the chemical changes may appear to
operate over a short timescale (~10⁻⁵ s), this period is nonetheless a factor of ~10¹⁸
longer than the time taken for the original particle to traverse the cell nucleus. Thus,
on the microscopic scale, there is a relatively long period during which chemical
damage is inflicted (Table2.1).
It is important to note that, irrespective of the nature of the primary
radiation (which may be composed of particles and/or electromagnetic waves),
the mechanism by which energy is transferred from the primary radiation beam to
biological targets is always via the secondary electrons which are produced. The
initial ionization events (which occur near-instantaneously at the microscopic level)
are the precursors to a chain of subsequent events which may eventually lead to the
clinical (macroscopic) manifestation of radiation damage.
Expression of cell death in individual lethally damaged cells occurs
later, usually at the point at which the cell next attempts to enter mitosis. Gross
(macroscopic and clinically observable) radiation effects are a result of the
wholesale functional impairment that follows from lethal damage being inflicted to
large numbers of cells or critical substructures. The timescale of the whole process
may extend to months or years. Thus, in clinical studies, any deleterious health
effects associated with a radiation procedure may not be seen until long after the
diagnostic test or treatment has been completed (Table2.1).
TABLE 2.1. THE TIMESCALES OF RADIATION EFFECTS

Action                          Approximate timescale
                                10⁻¹⁸ s
                                10⁻¹⁵ s
                                10⁻¹⁰ s
                                10⁻⁹ s
Chemical changes                10⁻⁵ s
                                Hours to months
                                Hours to years
they behave exactly as do the electrons created following the passage of a γ ray,
giving up their energy (usually of the order of several hundred kiloelectronvolts)
to other atoms and molecules through a series of collisions.
For radionuclides which emit both β particles and γ photons, it is usually
the β particulate radiation which delivers the greatest fraction of the radiation dose
to the organ which has taken up the activity. For example, about 90% of the dose
delivered to the thyroid gland by 131I arises from the β component. On the other
hand, the γ emissions contribute more significantly to the overall whole body
dose.
2.3.1.3. Alpha particles
Alpha radiation is emitted when heavy, unstable nuclides undergo α decay.
Alpha particles consist of a helium nucleus (two protons combined with two
neutrons) emitted in the process of nuclear decay. The α particles possess
approximately 7000 times the mass of a β particle and twice the electronic charge,
and give up their energy over a very short range (<100 μm). Alpha particles
usually possess energies in the megaelectronvolt range and, because they lose this
energy over such a short range, are biologically very efficacious, i.e. they possess a
high linear energy transfer (LET; see Section 2.6.3) and are associated with a high
relative biological effectiveness (RBE; see Section 2.6.4).
2.3.1.4. Auger electrons
Radionuclides which decay by electron capture or internal conversion
leave the atom in a highly excited state with a vacancy in one of the inner shell
electron orbitals. This vacancy is rapidly filled by either a fluorescent transition
(characteristic X ray) or non-radiative (Auger) transition, in which the energy
gained by the electron transition to the deeper orbital is used to eject another
electron from the same atom. Auger electrons are very short range, low energy
particles that are often emitted in cascades, a consequence of the inner shell atomic
vacancy that traverses up through the atom to the outermost orbital, ejecting
additional electrons at each step. This cluster of very low energy electrons can
produce ionization densities comparable to those produced by an α particle track.
Thus, radionuclides which decay by electron capture and/or internal conversion
can exhibit high LET-like behaviour close (within 2 nm) to the site of the decay.
FIG.2.1. Illustration of the difference between direct and indirect damage to cellular DNA.
fractional cell survival is plotted on the logarithmic vertical axis. Each of the
individual points on the graph represents the fractional survival of cells resulting
from delivery of single acute doses of the specified radiation, which in this case
is assumed to be radiation. (In the context of the subject, an acute dose of
radiation may be taken to mean one which is delivered at high dose rate, i.e. the
radiation delivery is near instantaneous.) Mammalian cell survival curves plotted
in this way are associated with two main characteristics: a finite initial slope (at
zero dose) and a gradually increasing slope as dose is increased.
FIG.2.2. A radiation cell survival curve plots the fraction of plated cells retaining colony
forming ability (cell surviving fraction) versus radiation absorbed dose.
TABLE 2.2. THE LINEAR ENERGY TRANSFER OF DIFFERENT RADIATIONS

Radiation type          LET (keV/μm)
60Co γ rays             0.2
250 kVp X rays          2.0
10 MeV protons          4.7
2.5 MeV α particles     166
1 MeV electrons         0.25
10 keV electrons        2.3
1 keV electrons         12.3
FIG. 2.3. The relative biological effectiveness (RBE) of a radiation is defined as the ratio of
the dose of a reference low linear energy transfer (LET) radiation to the dose of the test
radiation required to produce the same reduction in cell survival.
If the respective low and high LET isoeffective doses are dL and dH, then:

RBE = dL/dH   (2.1)
where the suffixes L and H again respectively refer to the low and high LET
instances.
Figure 2.4 shows an example of how the RBEs determined at any particular
end point (cell surviving fraction) vary with changing dose per fraction
for a low LET radiation. The maximum RBE (RBEmax) occurs
at zero dose and, in terms of microdosimetric theory, corresponds to the ratio
between the respective high and low LET linear radiosensitivity constants, αH
and αL, i.e.:

RBEmax = αH/αL   (2.4)
FIG.2.4. Relative biological effectiveness (RBE) as a function of the radiation dose per
fraction.
Correspondingly, the minimum RBE (RBEmin) is given by the ratio of the square
roots of the respective high and low LET quadratic radiosensitivity constants,
βH and βL:

RBEmin = √βH/√βL   (2.5)

and the working RBE at any given dose per fraction is given as:

RBE = {(α/β)L RBEmax + √[(α/β)L² RBEmax² + 4dL RBEmin²((α/β)L + dL)]} / {2[(α/β)L + dL]}   (2.6)
when expressed in terms of the low LET dose per fraction dL or:
RBE = {−(α/β)L + √[(α/β)L² + 4dH((α/β)L RBEmax + RBEmin² dH)]} / (2dH)   (2.7)
when expressed in terms of the high LET dose per fraction dH.
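The behaviour of Eq. (2.6) can be evaluated directly. The sketch below (added here for illustration) uses the same illustrative parameters quoted for Fig. 2.4 (RBEmax = 5, RBEmin = 1, (α/β)L = 3 Gy) and shows the RBE falling steadily with increasing dose per fraction:

```python
import math

def rbe_low_let_dose(d_L, rbe_max=5.0, rbe_min=1.0, ab_L=3.0):
    """Working RBE at low LET dose per fraction d_L (Gy), per eq. (2.6).
    Defaults are the illustrative parameters used for Fig. 2.4."""
    k = ab_L  # the low LET alpha/beta ratio (Gy)
    disc = (k * rbe_max) ** 2 + 4 * d_L * rbe_min**2 * (k + d_L)
    return (k * rbe_max + math.sqrt(disc)) / (2 * (k + d_L))

for d in (0.5, 2.0, 8.0):
    print(f"d_L = {d:4.1f} Gy: RBE = {rbe_low_let_dose(d):.2f}")
# RBE falls from ~4.3 at 0.5 Gy/fraction towards RBE_min at large fractions,
# approaching RBE_max = 5 only in the limit of zero dose.
```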
Figure 2.4 was derived using RBEmax = 5, RBEmin = 1 and (α/β)L = 3 Gy,
but the general trend of a steadily falling RBE with increasing dose per fraction
is independent of the chosen values. Clearly, the assumption of a fixed value
of RBE, if applied to all fraction sizes, could lead to gross clinical errors and
Eqs(2.6) and (2.7) make the point that determination of RBEs in a clinical setting
is potentially complex and will depend on accurate knowledge of RBEmax and (if
it is not unity) RBEmin. Although there is not yet clear evidence over whether
or not there is a consistent trend for RBEmin to be non-unity, the possibility is
nevertheless important as it may hold very significant implications.
Figure2.4 also shows schematically how the rate of change of RBE with
changing dose per fraction is influenced by the existence of a non-unity RBEmin
parameter. Even for a fixed value of RBEmax, the potential uncertainty in the RBE
values at the fraction sizes likely to be used clinically might themselves be very
large if RBEmin is erroneously assumed to be unity. These uncertainties would be
compounded if there were an additional linkage between RBEmax and the tissue
α/β value.
As is seen from Eqs (2.6) and (2.7), the RBE value at any particular dose
fraction size will also be governed by the low LET α/β ratio (a tissue dependent
parameter which provides a measure of how tissues respond to changes in dose
fractionation) and the dose fraction size (a purely physical parameter) at the point
under consideration. Finally, and as has been shown through the earlier clinical
experience with neutron therapy, the RBEmax value may itself be tissue dependent,
likely being higher for the dose-limiting normal tissues than for tumours. This
tendency is borne out by experimental evidence using a variety of ion species as
well as by theoretical microdosimetric studies. This potentially deleterious effect
may be offset by the fact that, in carbon-, helium- and argon-ion beams, LET
(and, hence, RBE) will vary along the track in such a way that it is low at the
entry point (adjacent to normal tissues) and highest at the Bragg peak located
in the tumour. However, although this might be beneficial, it does mean that the
local RBE is more spatially variable than is indicated by Eq.(2.6).
Owing to the difficulties in setting reference doses at which clinical
inter-comparisons could be made more straightforward, Wambersie proposed
that a distinction be made between the reference RBE and the clinical
than in the acute case, but that the initial slope remains unchanged. When the
doses are all delivered at a very low dose rate, as is the case for most radionuclide
therapies, the response is essentially a straight line, when the curves are plotted
on a log-linear scale, as is common practice for radiation survival curves. This
means that the low dose response is purely exponential.
FIG. 2.5. Surviving fraction as a function of dose for different dose rates. It is important
to note that most radionuclide therapies are delivered at low dose rate, in the range of
0.1–0.5 Gy/h, where the survival curve is almost linear.
For a single acute dose d, the linear quadratic model gives the surviving fraction S as:

S = exp(−αd − βd²)   (2.8)

where α (in units of Gy⁻¹) and β (in units of Gy⁻²) are the respective linear and
quadratic sensitivity coefficients.
If the treatment is repeated in N well spaced fractions, then the net survival
is SN, where:

SN = S^N = exp(−Nαd − Nβd²)   (2.9)

Taking the negative natural logarithm of Eq. (2.9):

−ln SN = Nαd + Nβd²   (2.10)

= Nαd[1 + d/(α/β)]   (2.11)

Although the parameters α and β are rarely known in detail for individual
tumours or tissues, values of the ratio α/β (in units of grays) are becoming
increasingly known from clinical and experimental data. In general, α/β is
systematically higher (5–20 Gy) for tumours than for critical, late-responding
normal tissues (2–5 Gy) and it is this difference which provides the BED concept
with much of its practical usefulness.
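The practical consequence of the differing α/β values follows from Eq. (2.11), from which the biologically effective dose for N fractions of size d is Nd[1 + d/(α/β)]. The added sketch below uses an illustrative 30 × 2 Gy schedule (the fraction numbers and α/β values are examples, not from the text):

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose (Gy) for n well spaced fractions of
    size d (Gy), following from eq. (2.11): BED = n*d*(1 + d/(alpha/beta))."""
    return n * d * (1 + d / alpha_beta)

# 30 fractions of 2 Gy: compare a tumour (alpha/beta = 10 Gy) with a
# late-responding normal tissue (alpha/beta = 3 Gy):
print(f"Tumour BED:      {bed(30, 2.0, 10.0):.0f} Gy")  # 72 Gy
print(f"Late tissue BED: {bed(30, 2.0, 3.0):.0f} Gy")   # 100 Gy
```

The same physical dose is biologically more demanding on the low α/β (late-responding) tissue, which is why fraction size matters clinically.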
For irradiations in which the dose delivery is protracted, the biologically
effective dose (BED) can be written as:

BED = Nd[1 + d·g(t)/(α/β)]   (2.12)

where g(t) accounts for the repair of sublethal damage which occurs while the
irradiation is in progress. For a continuous irradiation of duration t:

g(t) = [2/(μt)][1 − (1 − exp(−μt))/(μt)]   (2.13)

where μ is the sublethal damage repair rate constant, related to the tissue repair
half-time T1/2 as:

μ = 0.693/T1/2   (2.14)

For a continuous irradiation delivered at a constant dose rate R for a time t, the
BED is then:

BED = Rt{1 + [2R/(μ(α/β))][1 − (1 − exp(−μt))/(μt)]}   (2.15)

which, for prolonged irradiations (μt ≫ 1), reduces to:

BED = Rt[1 + 2R/(μ(α/β))]   (2.16)
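Equations (2.14) and (2.15) can be combined in a short calculation. In the added sketch below, the dose rate, duration, α/β and repair half-time are illustrative values typical of the radionuclide therapy range mentioned for Fig. 2.5, not figures taken from the text:

```python
import math

def bed_low_dose_rate(R, t, alpha_beta, t_half_repair=1.5):
    """BED (Gy) for continuous irradiation at constant dose rate R (Gy/h)
    for t hours, per eq. (2.15). t_half_repair is the sublethal damage
    repair half-time in hours (illustrative value)."""
    mu = 0.693 / t_half_repair                    # eq. (2.14)
    g = 1 - (1 - math.exp(-mu * t)) / (mu * t)    # dose protraction factor
    return R * t * (1 + 2 * R * g / (mu * alpha_beta))

# 0.3 Gy/h for 48 h to a tissue with alpha/beta = 3 Gy
# (physical dose 14.4 Gy):
print(f"BED = {bed_low_dose_rate(0.3, 48, 3.0):.1f} Gy")
```

Because μt ≫ 1 here, the result is close to the simpler limiting form of Eq. (2.16).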
effects which need to be considered as potential side effects from diagnostic uses
of radionuclides, although deterministic damage may result if the embryo or fetus
is irradiated. For radionuclide therapy applications, the concerns relate to both
stochastic and deterministic effects.
2.8. SPECIAL RADIOBIOLOGICAL CONSIDERATIONS
IN TARGETED RADIONUCLIDE THERAPY
2.8.1. Radionuclide targeting
Tumour targeted radiotherapy is a very promising approach for the
treatment of widespread metastases and disseminated tumour cells. This
technique aims to deliver therapeutic radiation doses to the tumour while
sparing normal tissues by targeting a structure that is abundant in tumour cells,
but rare in normal tissues. This can be done by using antibodies directed against
a specific tumour target and labelled with a therapeutically relevant radionuclide.
Radiolabelled antibody therapy has already become common in the treatment of
non-Hodgkin's lymphoma, e.g. 131I-tositumomab (Bexxar) and 90Y-ibritumomab
diseases. A good example is epidermal growth factor (EGF) labelled with 125I
which will bind EGF receptors. EGF receptors are overexpressed on tumour cells
in many malignancies such as highly malignant gliomas. At present, several other
radiolabelled antibodies are being used in experimental models and in clinical
trials to study their feasibility in other types of cancer.
2.8.2. Whole body irradiation
Conventional external beam radiotherapy involves controlled irradiation of
a carefully delineated target volume. Normal structures adjacent to the tumour
will likely receive a dose, in some cases a moderately high dose, but the volumes
involved are relatively small. The rest of the body receives only a minimal dose,
mostly arising from radiation scattered within the patient from the target volume
and from a small amount of leakage radiation emanating from the treatment
machine outside the body.
Targeted radionuclide therapies are most commonly administered
intravenously and, thus, can give rise to substantial whole body doses and, in
particular, doses to the radiation sensitive bone marrow. Once the untargeted
activity is removed from the blood, it may give rise to substantial doses in
normal structures, especially the kidneys. Furthermore, the activity taken up by
the kidneys and targeted tumour deposits may (if γ ray emissions are involved)
continue to irradiate the rest of the body.
2.8.3. Critical normal tissues for radiation and radionuclide therapies
Since the radiation doses used in radionuclide therapies are much higher
than the doses used for diagnosis, (prolonged) retention of the pharmaceuticals
within the blood circulation and, hence, increased accumulation of radionuclides
in non-tumour cells, might lead to unwanted toxicities. The bone marrow, kidney
and liver are regarded as the main critical organs for systemic radionuclide therapy.
Other organs at risk are the intestinal tract and the lungs. The bone marrow is
very sensitive to ionizing radiation. Exposure of the bone marrow to high
doses of radiation will lead to a rapid depression of white blood cells followed
a few weeks later by platelet depression, and in a later stage (approximately one
month after exposure) also by depression of the red blood cells. In general, these
patients could suffer from infections, bleeding and anaemia. Radiation damage to
the gastrointestinal tract is characterized by de-population of the intestinal mucosa
(usually between 3 and 10 days) leading to prolonged diarrhoea, dehydration,
loss of weight, etc. The kidneys, liver and lungs will show radiation induced
damage several months after exposure. In kidneys, a reduction of proximal tubule
cells is observed. These pathological changes finally lead to nephropathy. In the
liver, hepatocytes are the radiosensitive targets. Since the lifespan of the cells is
about a year, deterioration of liver function will become apparent between 3 and
9 months after exposure. In lungs, pulmonary damage is observed in two waves:
an acute wave of pneumonitis and later fibrosis.
The determinants of normal tissue response in radionuclide studies
constitute a large subject, owing to the diversity of radiopharmaceuticals with differing
pharmacokinetics and biodistribution, and the widely differing responses and
tolerances of the critical normal tissues. The type of toxicity that arises depends
principally on the radionuclide employed. For example, isotopes of iodine
localize in the thyroid (unless blocked), salivary glands, stomach and bladder.
Strontium, yttrium, samarium, fluorine, radium, etc. concentrate in bone.
Several radiometals, such as bismuth, can accumulate in the kidney. If these
radionuclides are tightly conjugated to a targeting molecule, the biodistribution
and clearance are determined by that molecule. For high molecular weight
targeting agents, such as an antibody injected intravenously, the slow plasma
clearance results in the bone marrow being the principal dose limiting organ. For
smaller radiolabelled peptides, renal toxicity becomes of concern. When studying
a new radiopharmaceutical or molecular imaging agent, it is always important
to perform a detailed study of the biodistribution at trace doses, to ensure the
CHAPTER 3
RADIATION PROTECTION
S.T. CARLSSON
Department of Diagnostic Radiology,
Uddevalla Hospital,
Uddevalla, Sweden
J.C. LE HERON
Division of Radiation, Transport and Waste Safety,
International Atomic Energy Agency,
Vienna
3.1. INTRODUCTION
Medical exposure is the largest human-made source of radiation exposure,
accounting for more than 95% of the radiation exposure from human-made sources.
Furthermore, the use of radiation in medicine continues to increase worldwide: more machines are
accessible to more people, the continual development of new technologies and
new techniques adds to the range of procedures available in the practice of
medicine, and the role of imaging is becoming increasingly important in day
to day clinical practice. The introduction of hybrid imaging technologies, such
as positron emission tomography/computed tomography (PET/CT) and single
photon emission computed tomography (SPECT)/CT, means that the boundaries
between traditional nuclear medicine procedures and X ray technologies are
becoming blurred. Worldwide, the total number of nuclear medicine examinations
is estimated to be about 35 million per year.
In Chapter 2, basic radiation biology and radiation effects were described,
demonstrating the need for a system of radiation protection. Such a system allows
the many beneficial uses of radiation to be utilized, but at the same time ensures
detrimental radiation effects are either prevented or minimized. This can be
achieved by having the objectives of preventing the occurrence of deterministic
effects and of limiting the probability of the stochastic effects to a level that is
considered acceptable. In a nuclear medicine facility, consideration needs to
be given to the patient, the staff involved in performing the nuclear medicine
procedures, members of the public and other staff that may be in the nuclear
medicine facility, carers and comforters of patients undergoing procedures,
The ICRP then puts exposure of individuals into three categories: medical
exposure, occupational exposure and public exposure:
- Medical exposure refers primarily to exposure incurred by patients for
the purpose of medical diagnosis or treatment. It also refers to exposures
incurred by individuals helping in the support and comfort of patients
undergoing diagnosis or treatment, and by volunteers in a programme of
biomedical research involving their exposure.
- Occupational exposure is the exposure of workers incurred in the course of
their work.
- Public exposure is exposure incurred by members of the public from all
exposure situations, but excluding any occupational or medical exposure.
All three need to be considered in the nuclear medicine facility.
An individual person may be subject to one or more of these categories of
exposure, and for radiation protection purposes such exposures are dealt with
separately.
The ICRP system has three fundamental principles of radiological
protection, namely:
- The principle of justification: Any decision that alters the radiation exposure
situation should do more good than harm.
- The principle of optimization of protection: The likelihood of incurring
exposures, the number of people exposed and the magnitude of their
individual doses should all be kept as low as reasonably achievable
(ALARA), taking into account economic and societal factors.
- The principle of limitation of doses: The total dose to any individual
from regulated sources in planned exposure situations other than medical
exposure of patients should not exceed the appropriate limits recommended
by the ICRP. Recommended dose limits are given in Table 3.1.
In a nuclear medicine facility, occupational and public exposures are subject
to all three principles, whereas medical exposure is subject to the first two only.
More detail on the application of the ICRP system for radiological protection as
it applies to a nuclear medicine facility is given in the remainder of this chapter.
TABLE 3.1. RECOMMENDED DOSE LIMITS IN PLANNED EXPOSURE SITUATIONS(a)

Type of limit                      Occupational                Public
Effective dose                     20 mSv/a averaged over      1 mSv in a year(c)
                                   5 consecutive years(b)
Annual equivalent dose to:
  Lens of the eye(d)               20 mSv                      15 mSv
  Skin(e)(f)                       500 mSv                     50 mSv
  Hands and feet                   500 mSv

(a) Limits on effective dose are for the sum of the relevant effective doses from external
exposure in the specified time period and the committed effective dose from intakes of
radionuclides in the same period. For adults, the committed effective dose is computed for a
50 year period after intake, whereas for children it is computed for the period up to reaching
70 years of age.
(b) With the further provision that the effective dose should not exceed 50 mSv in any single
year. Additional restrictions apply to the occupational exposure of pregnant women.
(c) In special circumstances, a higher value of effective dose could be allowed in a single year,
provided that the average over 5 years does not exceed 1 mSv/a.
(d) In 2011, the ICRP recommended that the occupational dose limit for the lens of the eye be
lowered from the previous 150 mSv/a to 20 mSv/a, averaged over 5 years, and with no more
than 50 mSv in any single year.
(e) The limitation on effective dose provides sufficient protection for the skin against stochastic
effects.
(f) Averaged over a 1 cm² area of skin regardless of the area exposed.
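The occupational effective dose limit in Table 3.1 combines a five year average with a single year ceiling, so compliance is a two part test. A minimal Python sketch of such a check (the function name and list interface are illustrative, not from any standard):

```python
def occupational_effective_dose_ok(annual_doses_msv):
    """Check recorded annual effective doses (mSv) against the ICRP
    occupational limits: 20 mSv/a averaged over 5 consecutive years,
    with no more than 50 mSv in any single year."""
    recent = annual_doses_msv[-5:]  # most recent 5 year averaging period
    return sum(recent) / len(recent) <= 20.0 and max(recent) <= 50.0

# Doses of 10, 15, 20, 25 and 30 mSv average exactly 20 mSv/a and stay
# below 50 mSv in every year, so the check passes; a single year of
# 55 mSv fails even if the 5 year average is acceptable.
```

Note that national regulations may impose stricter or differently structured limits; this sketch encodes only the ICRP recommendation quoted in the table.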
The mean absorbed dose in a specified tissue or organ T is defined as:

D_T = ε_T / m_T (3.1)

where

m_T is the mass of the organ or tissue T;

and ε_T is the total energy imparted by radiation to that tissue or organ.

The International System of Units (SI) unit of mean organ dose is joules per
kilogram (J/kg), which is termed the gray (Gy).
Different types of ionizing radiation differ in their effectiveness in damaging
human tissue at the same absorbed dose, and the probability of stochastic
effects depends on the tissue irradiated. It is, therefore,
necessary to introduce quantities to account for these factors. Those quantities
are equivalent dose and effective dose. Since they are not directly measurable,
the International Commission on Radiation Units and Measurements (ICRU)
has defined a set of operational quantities for radiation protection purposes (area
monitoring and personal monitoring): the ambient dose equivalent, directional
dose equivalent and personal dose equivalent.
Regarding internal exposure from radionuclides, the equivalent dose
and the effective dose are not only dependent on the physical properties of the
radiation but also on the biological turnover and retention of the radionuclide.
This is taken into account in the committed dose quantities (equivalent and
effective).
3.2.3.1. Equivalent dose
It is a well known fact in radiobiology that densely ionizing radiation such
as α particles and neutrons will cause greater harm to a tissue or organ than
γ rays and electrons at the same mean absorbed dose. This is because the dense
ionization events will result in a higher probability of irreversible damage to the
chromosomes and a lower chance of tissue repair. To account for this, the organ
dose is multiplied by a radiation weighting factor to obtain a quantity that
more closely reflects the biological effect on the irradiated tissue or organ. This
quantity is called the equivalent dose and is defined as:
H_T = Σ_R w_R D_T,R (3.2)

where

D_T,R is the mean tissue or organ dose delivered by radiation of type R;

and w_R is the radiation weighting factor for radiation of type R.

For X rays, γ rays and electrons, w_R = 1; for α particles, w_R = 20. The SI unit of
equivalent dose is joules per kilogram (J/kg), which is termed the sievert (Sv). In a
situation of exposure from different types of radiation, the total equivalent dose is
the sum of the equivalent doses from each type of radiation, which is why Eq. (3.2)
sums over R.
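The equivalent dose calculation can be illustrated with a short worked example. A minimal Python sketch (the function name and dictionary interface are illustrative; only the weighting factors stated in the text are included):

```python
# Radiation weighting factors from the text: w_R = 1 for X rays, gamma
# rays and electrons, and w_R = 20 for alpha particles (ICRP values).
W_R = {"photon": 1.0, "electron": 1.0, "alpha": 20.0}

def equivalent_dose_msv(mean_organ_doses_mgy):
    """H_T = sum over R of w_R * D_T,R.
    mean_organ_doses_mgy maps radiation type -> mean absorbed dose (mGy)."""
    return sum(W_R[r] * d for r, d in mean_organ_doses_mgy.items())

# 1 mGy from gamma rays plus 0.1 mGy from alpha particles gives
# H_T = 1*1.0 + 20*0.1 = 3.0 mSv
```

For other radiation types (e.g. neutrons, protons) the ICRP tabulates additional w_R values not included in this sketch.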
3.2.3.2. Effective dose

To reflect the combined detriment from stochastic effects in all irradiated
tissues and organs, the equivalent doses are weighted by tissue weighting
factors w_T and summed to give the effective dose:

E = Σ_T w_T H_T (3.3)

For internal exposure, the intake of a radionuclide delivers dose over an
extended period. The committed equivalent dose is obtained by integrating
the equivalent dose rate over the integration period τ following the intake:

H_T(τ) = ∫ from t0 to t0+τ of Ḣ_T(t) dt (3.4)

where t0 is the time of intake. The corresponding committed effective dose is:

E(τ) = Σ_T w_T H_T(τ) (3.5)
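Assuming the standard ICRP definition, in which the effective dose is a tissue-weighted sum of equivalent doses, a minimal Python sketch (the function name is illustrative, and the two ICRP Publication 103 weighting factors quoted should be verified against the publication):

```python
def effective_dose_msv(equivalent_doses_msv, tissue_weights):
    """E = sum over T of w_T * H_T; also applies in the committed form
    E(tau) = sum over T of w_T * H_T(tau). Tissues absent from
    equivalent_doses_msv are treated as unexposed; the complete ICRP
    set of tissue weighting factors sums to 1."""
    return sum(tissue_weights[t] * h for t, h in equivalent_doses_msv.items())

# Illustrative ICRP 103 factors: w_T = 0.12 (lung), 0.04 (thyroid).
w_t = {"lung": 0.12, "thyroid": 0.04}
# H_lung = 2 mSv and H_thyroid = 5 mSv give E = 0.12*2 + 0.04*5 = 0.44 mSv
```

A full calculation would include all tissues assigned a weighting factor by the ICRP, not just the two shown.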
can only be achieved if the people involved regard the rules and regulations as
necessary, and are a support to and not a hindrance in their daily work. Every
individual should also know their responsibilities through formal assignment of
duties.
3.3.2. Responsibilities
3.3.2.1. Licensee and employer
The licensee of a nuclear medicine facility, through the authorization issued
by the regulatory body, has the prime responsibility for applying the relevant
national regulations and meeting the conditions of the licence. The licensee may
appoint other people to carry out actions and tasks related to these responsibilities,
but the licensee retains overall responsibility. In particular, the nuclear medicine
physician, the medical physicist, the nuclear medicine technologist, the
radiopharmacist and the radiation protection officer (RPO) all have key roles
and responsibilities in implementing radiation protection in a nuclear medicine
facility, and these are discussed in more detail below.
The BSS need to be consulted for details on all of the requirements for
radiation protection that are assigned to licensees. Employers are also assigned
many responsibilities, in cooperation with the licensee, for occupational radiation
protection. Key responsibilities for the licensee include ensuring that the
necessary personnel (nuclear medicine physicians, medical physicists, nuclear
medicine technologists, radiopharmacists and an RPO) are appointed, and that
the individuals have the necessary education, training and competence to perform
their respective duties. Clear responsibilities for personnel must be assigned; a
radiation protection programme (RPP) must be established and the necessary
resources provided; a comprehensive quality assurance (QA) programme must
be established; and education and training of personnel supported.
3.3.2.2. Nuclear medicine specialist
The general medical and health care of the patient is, of course, the
responsibility of the individual physician treating the patient. However, when the
patient presents in the nuclear medicine facility, the nuclear medicine specialist
has the particular responsibility for the overall radiation protection of the patient.
This means responsibility for the justification of a given nuclear medicine
procedure for the patient, in conjunction with the referring medical practitioner,
and responsibility for ensuring the optimization of protection in the performance
of the examination or treatment.
if it has left the facility in a patient. The regulatory body should promptly be
informed in cases of lost or stolen sources.
When a radioactive source is not in use, it should always be stored. In a
nuclear medicine facility, the sources are generally stored in the room where
preparation of radiopharmaceuticals is undertaken. Storage of sources is further
discussed in Chapter 9.
It is necessary to consider the possible consequences of an accidental fire
and to take steps to minimize this risk. Careful selection of non-flammable
construction materials when building the storage facility will greatly reduce this
hazard. The storage facility should not be used to hold any highly flammable or
highly reactive materials. Liaison with the local firefighting authority is necessary
and their advice should be sought regarding provision of firefighting equipment
in the vicinity of the radioactive waste store.
3.4.4. Structural shielding
Structural shielding should be considered in a busy facility where large
activities are handled and where many patients are waiting and examined. In a
PET/CT facility, structural shielding is always necessary and the final design will
generally be determined by the PET application because of the high activities
used and because of the high energy of the annihilation radiation. Careful
calculations should be performed to establish whether a barrier is needed and
how it should be constructed. Such calculations should include not only the walls
but also the floor and ceiling, and must be made by a qualified medical physicist.
Radiation surveys should always be performed to verify the calculations.
The correct design of protective barriers is of the utmost importance not
only from a protection but also from an economic point of view. If the basic
calculations are wrong, it will become very expensive to correct the mistakes
later when the whole construction is completed. It is, therefore, very important
that a qualified expert, such as a medical physicist, be consulted in the planning
stage.
3.4.5. Classification of workplaces
With regard to occupational exposure, the BSS require the classification of
workplaces as controlled areas or as supervised areas.
In a controlled area, individuals follow specific protective measures to
control radiation exposures. It will be necessary to designate an area as controlled
if it is difficult to predict doses to individual workers or if individual doses
may be subject to wide variations. The controlled area must be delineated and
all relevant regulations, is considered and planned for at the early stages of any
projects involving radioactive materials. It is the responsibility of the licensee to
provide safe management of the radioactive waste. It should be supervised by the
RPO and local rules should be available.
Containers to allow segregation of different types of radioactive waste
should be available in areas where the waste is generated. The containers must
be suitable for the purpose (volume, shielding, being leakproof, etc.). Each type
of waste should be kept in separate containers that are properly labelled to supply
information about the radionuclide, physical form, activity and external dose rate.
A room for interim storage of radioactive waste should be available. The
room should be locked, properly marked and, if necessary, ventilated. Flammable
waste should be placed separately. It is essential that all waste be properly packed
in order to avoid leakage during storage. Biological waste should be refrigerated
or put in a freezer. Records should be kept, so that the origin of the waste can be
identified.
The final disposal of the radioactive waste produced in the nuclear medicine
facility includes several options: storage for decay and disposal as cleared waste
into the sewage system (aqueous waste), through incineration or transfer to a
landfill site (solid waste), or transfer of sources to the vendor or to a special waste
disposal facility outside of the hospital.
For many of the wastes generated in hospitals, storage for decay is a useful
option because the radionuclides generally have short half-lives. This can be
done in the hospital and may include some treatment of the wastes to ensure
safe storage. Other types of waste containing radionuclides with longer half-lives
must be transferred to a special waste treatment, storage and disposal facility
outside of the hospital. One option is to return the source to the vendor. This is an
attractive option for radionuclide generators and might also be useful for sealed
sources used in a quality control programme. The option of returning the source
should be provided for in the purchase process.
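The decision to store waste for decay reduces to a half-life calculation: solving A(t) = A0 · 2^(−t/T½) for the time at which the activity falls below a clearance level. A minimal Python sketch (the function name is illustrative; actual clearance levels are nuclide specific and set by the regulatory body):

```python
import math

def decay_storage_days(initial_activity_mbq, clearance_level_mbq, half_life_days):
    """Days of decay storage needed before the activity falls below the
    clearance level: t = T_half * log2(A0 / A_clearance)."""
    if initial_activity_mbq <= clearance_level_mbq:
        return 0.0  # already below the clearance level
    return half_life_days * math.log2(initial_activity_mbq / clearance_level_mbq)

# 800 MBq decaying to 100 MBq takes exactly 3 half-lives.
```

This is why decay storage suits short lived diagnostic nuclides: ten half-lives reduce the activity by a factor of about 1000, which for many such nuclides is a matter of days.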
For diagnostic patients there is generally no need for collection of excreta.
Ordinary toilets can be used. For therapy patients, there are different policies in
different countries, either to use separate toilets equipped with delay tanks or an
active treatment system, or to allow the excreta to be released directly into the
sewage system. This is further discussed in Chapter 20.
3.5. OCCUPATIONAL EXPOSURE
Detailed requirements for protection against occupational exposure are
given in Section 3 of the BSS, and recommendations on how to meet these
requirements are given in IAEA Safety Guides [3.5–3.7]. All of these safety
these patients, the accumulated radiation dose can be significant. These workers
should, whenever possible, maximize their distance from the patient and spend as
little time as possible in close proximity to the patient.
In summary, the following protective approaches can reduce external
exposure significantly:
- For preparation and dispensing of radiopharmaceuticals, working behind a
lead glass bench shield, and using shielded vials and syringes;
- For administration of radiopharmaceuticals to patients, using lead aprons in
the case of prolonged injection and high activity, and using a syringe shield;
- During examinations, when the distance to the patient is short, using a
movable transparent shield.
3.5.6. Personal monitoring
The licensee and employer have the joint responsibility to ensure that
appropriate personal monitoring is provided to staff. This normally means that
the RPO would specify which workers need to be monitored routinely, the type
of monitoring device to be used and the body position where the monitor should
be worn, bearing in mind that some countries may have specific regulatory
requirements on these issues. Further, the regulatory body is likely to have
specified the monitoring period and the time frame for reporting monitoring
results.
Staff to be monitored in a nuclear medicine facility should include all those
who work routinely with radionuclides or with the patients who have received
administrations of radiopharmaceuticals. This will include nursing staff who
either work routinely in nuclear medicine or nurse patients who have received
radionuclide therapy and staff dealing with excreta from radionuclide therapy.
Monitoring would not normally be extended to those who come into occasional
contact with nuclear medicine patients.
There are several types of external personal dosimetry systems and the
system to use is dependent on national or local conditions. In many countries,
the service is centralized to the regulatory body or provided through third party
personal dosimetry providers. Occasionally, some large hospitals have their own
personal dosimetry service. In all cases, the dosimetry provider must be approved
by the regulatory body.
Finger monitoring should be carried out occasionally on staff that regularly
prepare and administer radioactive substances to patients, and also when
setting up an operation which requires the routine handling of large quantities
of radionuclides. After handling unsealed radionuclides, the hands should be
monitored. It may, therefore, be convenient to mount a suitable contamination
monitor near the sink where hands are washed. Care should be taken to ensure
that the monitor itself does not become contaminated. In high background areas,
it will be necessary to shield the detector, and it may be convenient to have a foot
or elbow operated switch to activate the monitor.
Monitoring for internal contamination is rarely necessary in nuclear
medicine on radiation protection grounds but it may be useful in providing
reassurance to staff. The circumstances in which internal monitoring becomes
advisable are those where staff use significant quantities of 131I for thyroid therapy.
They should be included in a programme of thyroid uptake measurements.
In other circumstances where it is necessary to assess the intake of γ emitting
radionuclides (e.g. after a serious incident), the use of a whole body counter may
be appropriate. Such equipment should be available at national referral centres.
The possible use of an uncollimated gamma camera should also be considered.
Sometimes, a more detailed monitoring survey may be indicated if staff
doses have increased (or it is anticipated that they may do so in the future) as a
result of either the introduction of new examinations or procedures, or a change
in the nuclear medicine facility's equipment. The RPO should decide who should
be monitored and at which monitoring sites.
Individual monitoring results must be analysed and records must be kept. It
is vital that the individual monitoring results are regularly assessed and the cause
of unusually high dosimeter readings should be investigated by the RPO, with
ensuing corrective actions where appropriate. The administrative arrangements,
the scope and nature of the individual monitoring records, and the length of time
for which records have to be kept may differ among countries.
3.5.7. Monitoring of the workplace
The BSS require licensees to develop programmes for monitoring the
workplace. Such programmes are described in Section 3.4.6 and in Chapters 9
and 20.
3.5.8. Health surveillance
According to the BSS, the licensee needs to make arrangements for
appropriate health surveillance in accordance with the rules established by
the national regulatory body. The primary purpose of health surveillance is to
assess the initial and continuing fitness of employees for their intended tasks.
The health surveillance programme should be based on the general principles of
occupational health.
No specific health surveillance related to exposure to ionizing radiation is
necessary for staff involved in nuclear medicine procedures. Only in the case of
overexposed workers at doses much higher than the dose limits would special
investigations involving biological dosimetry and further extended diagnosis and
medical treatment be necessary.
Counselling should be available to workers such as women who are
or may be pregnant, individual workers who have or may have been exposed
substantially in excess of dose limits and workers who may be worried about
their radiation exposure.
3.5.9. Local rules and supervision
According to the BSS, employers and licensees must, in consultation with
the workers or through their representatives:
- Establish written local rules and procedures necessary to ensure adequate
levels of protection and safety for workers and other persons;
- Include in the local rules and procedures the values of any relevant
investigation level or authorized level, and the procedure to be followed in
the event that any such value is exceeded;
- Make the local rules and procedures, the protective measures and safety
provisions known to those workers to whom they apply and to other persons
who may be affected by them;
- Ensure that any work involving occupational exposure be adequately
supervised and take all reasonable steps to ensure that the rules, procedures,
protective measures and safety provisions be observed.
These local rules should include all working procedures involving unsealed
sources in the facility such as:
- Ordering radionuclides;
- Unpacking and checking the shipment;
- Storage of radionuclides;
- General rules for work in controlled and supervised areas;
- Preparation of radiopharmaceuticals;
- Personal and workplace monitoring;
- In-house transport of radionuclides;
- Management of radioactive waste;
- Administration of radiopharmaceuticals to the patients;
- Protection issues in patient examinations and treatments;
- Routine cleaning of facilities;
- Decontamination procedures;
- Care of radioactive patients.
rooms and toilets for injected and non-injected patients should be considered in
order to minimize both external exposure and the spread of contamination.
3.6.3. Exposure from patients
Every precaution must be taken to ensure that the doses received by
individuals who come close to a patient or who spend some time in neighbouring
rooms remain below the dose limit for the public and below any applicable dose
constraint. For almost all diagnostic procedures, the maximum dose that could be
received by another person due to external exposure from the patient is a fraction
of the annual public dose limit and it should not normally be necessary to issue any
special radiation protection advice to the patient. One exception is restrictions on
breast-feeding a baby, which will be further discussed in Section 3.7.2.4. Another
exception is an intensive use of positron emitters which may require structural
shielding based on the exposure of the public as discussed above (Section 3.4.4).
For patients who have undergone radionuclide therapy, specific advice should be
given regarding restrictions on their contact with other people. This is discussed
separately in Chapter 20.
3.6.4. Transport of sources
One possible source of exposure of the general public is transport of
sources. Transport takes place both inside and outside the nuclear medicine facility.
Inside the facility, it includes distribution of the radioactive sources
from the storage area to where they will be used. Such transport should be limited
as far as possible by the facility design. The transport that takes place should
be performed according to optimized radiation protection conditions as given by
local rules.
The transport of radioactive sources to and from the nuclear medicine
facility should follow the internationally accepted IAEA Regulations for the Safe
Transport of Radioactive Material [3.8]. These Regulations include basic rules
for the transport itself and regulations about the shape and labelling of packages.
In general, the package is built in several parts. It should be mechanically
safe and reduce the effect of potential fire and water damage. The package should
be labelled with a sign. There are three different labels: I-White, II-Yellow and
III-Yellow. In all cases, the radionuclide and its activity should be specified. The
label gives some indication of the dose rate D at the surface of the package:

Category I-White:    D ≤ 0.005 mSv/h
Category II-Yellow:  0.005 < D ≤ 0.5 mSv/h
Category III-Yellow: 0.5 < D ≤ 2 mSv/h
A more exact measure of the radiation level around the package is given by the
transport index, which is the maximum dose rate (in mSv/h) at a distance of 1 m
from the surface of the package, multiplied by 100.
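The transport index and the label thresholds can be sketched in code. In the full IAEA Transport Regulations the category also depends on the transport index, not only on the surface dose rate; this illustrative sketch encodes only the surface dose rate thresholds quoted above (function names are hypothetical):

```python
def transport_index(dose_rate_msv_per_h_at_1m):
    """TI = maximum dose rate (mSv/h) at 1 m from the package surface, x 100."""
    return dose_rate_msv_per_h_at_1m * 100.0

def package_label(surface_dose_rate_msv_per_h):
    """Label category from the surface dose rate D, using the quoted thresholds."""
    d = surface_dose_rate_msv_per_h
    if d <= 0.005:
        return "I-White"
    if d <= 0.5:
        return "II-Yellow"
    if d <= 2.0:
        return "III-Yellow"
    raise ValueError("surface dose rate exceeds the limit for a normal package")
```

For example, a package measuring 0.012 mSv/h at 1 m has a transport index of 1.2.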
3.7. MEDICAL EXPOSURE
The detailed requirements given in Section 3 of the BSS are applicable to
medical exposure in nuclear medicine facilities. Furthermore, Ref. [3.9] describes
strategies to involve organizations outside the regulatory framework, such as
professional bodies (nuclear medicine physicians, medical physicists, nuclear
medicine technologists, radiopharmacists), whose cooperation is essential to
ensure compliance with the BSS requirements for medical exposures. Examples
that may illustrate this point include the adoption of protocols for calibration
of unsealed sources and for QA and for reporting accidental medical exposure.
Reference [3.3] provides further specific advice. A summary of the most relevant
issues for nuclear medicine is given in this section.
3.7.1. Justification of medical exposure
The BSS state that:
Medical exposures shall be justified by weighing the expected diagnostic
or therapeutic benefits that they yield against the radiation detriment
that they might cause, with account taken of the benefits and the risks of
available alternative techniques that do not involve medical exposure.
The principle of justification of medical exposure should not only be applied
to nuclear medicine practice in general but also on a case by case basis, meaning
that any examination should be based upon a correct assessment of the indications
for the examination, the actual clinical situation, the expected diagnostic and
therapeutic yields, and the way in which the results are likely to influence the
diagnosis and the medical care of the patient. The nuclear medicine specialist
has the ultimate responsibility for the control of all aspects of the conduct and
extent of nuclear medicine examinations, including the justification of the given
procedure for a patient. The nuclear medicine specialist should advise and make
decisions on the appropriateness of examinations and determine the techniques
to be used. In justifying a given diagnostic nuclear medicine procedure, relevant
international or national guidelines should be taken into account.
Any nuclear medicine procedure that occurs as part of a biomedical
research project (typically as a tool to quantify changes in a given parameter
avoid failure to obtain the required diagnostic information; failure would result
in unnecessary (and, therefore, unjustified) irradiation and may also necessitate
repetition of the test.
If more than one radiopharmaceutical can be used for a procedure,
consideration should be given to the physical, chemical and biological properties
of each radiopharmaceutical, so as to minimize the absorbed dose and other risks
to the patient while at the same time providing the desired diagnostic information.
Other factors affecting the choice include availability, shelf life, instrumentation
and relative cost. It is also important that the radiopharmaceuticals used are
received from approved manufacturers and distributors, and are produced
according to national and international requirements. This is a requirement also
for in-house production of radiopharmaceuticals for PET studies.
The activity administered to a patient should always be determined and
recorded. Knowing the administered activity makes it possible to estimate the
absorbed dose to different organs as well as the effective dose to the patient.
Substantial reduction in absorbed dose from radiopharmaceuticals can be
achieved by simple measures such as hydration of the patient, use of thyroid
blocking agents and laxatives.
3.7.2.2. Optimization of protection in procedures
The nuclear medicine procedure starts with the request for an examination or
treatment. The request should be written and contain basic information about the
patient's condition. This information should help the nuclear medicine specialist
to decide about the most appropriate method to use and to decide how urgent
the examination is. The patient should then be scheduled for the examination or
treatment and be informed about when and where it will take place. Some basic
information about the procedure should also be given, especially if it requires
some preparation of the patient, such as fasting. These initial measures require
an efficient and reliable administrative system. In parallel to these routines, the
nuclear medicine facility has to ensure that the radiopharmaceutical to be used is
available at the time of the scheduled procedure.
When the patient appears in the nuclear medicine facility, they should
be correctly identified using the normal hospital or clinic routines. The patient
should be informed about the procedure and have the opportunity to ask questions
about it. A fully informed and motivated patient is the basis for a successful
examination or treatment. Before the administration of the radiopharmaceutical,
the patient should be interviewed about possible pregnancy, small children at
home, breast-feeding and other relevant questions which might have implications
for the procedure. Before administration, the technologist or doctor should
check the request and ensure that the right examination or treatment is scheduled
and that the right radiopharmaceutical and the right activity are dispensed. If
everything is in order, the administration can proceed. The administered activity
should always be recorded for each patient.
While most adults can maintain a required position without restraint
or sedation during nuclear medicine examinations, it may be necessary to
immobilize or sedate children, so that the examination can be completed
successfully. Increasing the administered activity to reduce the examination time
is an alternative that can be used in elderly patients with pain.
Optimization of protection in an examination means that equipment should
be operated within the conditions established in the technical specifications, thus
ensuring that it will operate satisfactorily at all times, in terms of both the tasks to
be accomplished and radiation safety. More details are given in Chapters 8 and 15.
Particular procedural considerations for children, pregnant women and lactating
women are given in the following subsections.
Optimization of protection in radionuclide therapy means that a correctly
calculated and measured activity should be administered to the patient in order
to achieve the prescribed absorbed dose in the organ(s) of interest, while the
radioactivity in the rest of the body is kept as low as reasonably achievable.
Optimization also means using routines to avoid accidental exposures of the
patient, the staff and members of the general public. Radionuclide therapy is
further discussed in Chapter 20.
The availability of a written manual of all procedures carried out by the
facility is highly desirable. The manual should regularly be revised as part of a
QA programme.
3.7.2.3. Pregnant women
Special consideration should be given to pregnant women exposed to
ionizing radiation due to the larger probability of inducing radiation effects in
individuals exposed in utero compared to exposed adults. As a basic rule, it is
recommended that diagnostic and therapeutic nuclear medicine procedures
of women likely to be pregnant be avoided unless there are strong clinical
indications.
In order to avoid unintentional irradiation of the unborn child, a female of
childbearing age should be evaluated regarding possible pregnancy or a missed
period. This should be done when interviewing and informing the woman prior to
the examination or treatment. It is also common to place a poster in the waiting
area requesting a woman to notify the staff if she is or thinks she is pregnant.
If there is no doubt that the patient is not pregnant, the examination
or treatment can be performed as planned. If pregnancy is confirmed,
careful consideration should be given to other methods of diagnosis or to the
RADIATION PROTECTION
a general guide, activities less than 10% of the normal adult activity should not
be administered.
In hybrid imaging, the CT protocol should be optimized by reducing the
tube current-time product (mAs) and tube potential (kV) without compromising
the diagnostic quality of the images. Careful selection of slice width and pitch as
well as scanning area should also be done. It is important that individual protocols
based on the size of the child are used. The principles behind such protocols
should be worked out by the medical physicist and the responsible specialist.
Since the examination times in nuclear medicine examinations are quite
long, there may be problems in keeping the child still during the examination.
Even small body motions can severely interfere with the quality of the
examination and make it useless. There are several methods of mechanical
support to restrain the child. Drawing the child's attention to something else, such
as a television programme can also be useful for older children. Sometimes, even
sedation or general anaesthesia may be necessary.
3.7.2.6. Calibration
The licensee of a nuclear medicine facility needs to ensure that a dose
calibrator or activity meter is available for measuring activity in syringes or vials.
The validity of measurements should be ensured by regular quality control of
the instrument, including periodic reassessment of its calibration, traceable to
secondary standards.
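The constancy element of such quality control can be illustrated with a short calculation: the reading obtained from a long lived reference source is compared with the activity expected from its calibration certificate after decay correction. A minimal sketch in Python; the source, certified activity, reading and acceptance limit are all invented for illustration:

```python
import math

def decay_corrected_activity(a0_mbq, half_life_days, elapsed_days):
    """Expected activity of a reference source after elapsed_days,
    using A(t) = A0 * exp(-ln2 * t / T_half)."""
    lam = math.log(2) / half_life_days
    return a0_mbq * math.exp(-lam * elapsed_days)

# Hypothetical 137Cs check source: 50 MBq certified 730 d (2 a) ago.
T_HALF_CS137_D = 30.05 * 365.25          # ~30 a expressed in days
expected = decay_corrected_activity(50.0, T_HALF_CS137_D, 730.0)

measured = 47.9                           # today's dose calibrator reading (MBq)
deviation_pct = 100.0 * (measured - expected) / expected
print(f"expected {expected:.1f} MBq, deviation {deviation_pct:+.2f} %")

# A typical acceptance criterion for a constancy test is +/-5 %.
assert abs(deviation_pct) < 5.0
```

A larger deviation would prompt investigation of the instrument and, ultimately, recalibration against a traceable standard.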
3.7.2.7. Clinical (patient) dosimetry
The licensee of a nuclear medicine facility should ensure that appropriate
clinical dosimetry by a medical physicist is performed and documented. For
diagnostic nuclear medicine, this should include representative typical patient
doses for common procedures. For therapeutic nuclear medicine, this needs to
be for each individual patient, and includes absorbed doses to relevant organs or
tissues.
3.7.2.8. Diagnostic reference levels
Many investigations have shown a large spread of administered activities
for a certain type of diagnostic nuclear medicine examination between different
hospitals within a country, even if the equipment used is similar in performance.
Even though no dose limits are applied to medical exposure, the process of
optimization should result in about the same administered activity for the same
type of examination and for the same size of patient.
[Figure: examples of situations in which accidental exposure may occur. Patients involved: request and scheduling; identification at arrival; information; administration of radiopharmaceutical; waiting; examination. Workers involved: ordering of sources; unauthorized ordering; storage of sources; loss of sources; handling of sources; contamination of facility; radioactive waste; radioactive patient.]
needed to prevent recurrence and implement the corrective measures. There may
be a requirement to report the event to the regulatory body.
A well established RPP is fundamental in accident prevention together with
a high level of safety culture in the organization and among the people working
in a nuclear medicine facility. The content of an RPP as well as the importance of
well established working procedures in order to protect patients, workers and the
general public have been discussed in the sections above. It should be stressed
that documentation of the procedures used in the facility is also important in
accident prevention. Other important factors are a well working QA programme
and a programme for continuing education and training which includes not only
the normal practices, but also accidental situations and lessons learned from
accidents.
3.8.2. Emergency plans
According to the BSS, the licensee needs to prepare emergency procedures
on the basis of events identified by the safety assessment. The procedures should
be clear, concise and unambiguous, and need to be posted visibly in places where
their need is anticipated. An emergency plan needs to, as a minimum, list and
describe:
Predictable incidents and accidents, and measures to deal with them;
The persons responsible for taking actions, with full contact details;
The responsibilities of individual personnel in emergency procedures
(nuclear medicine physicians, medical physicists, nuclear medicine
technologists, etc.);
Equipment and tools necessary to carry out the emergency procedures;
Training and periodic drills;
The recording and reporting system;
Immediate measures to avoid unnecessary radiation doses to patients, staff
and the public;
Measures to prevent access of persons to the affected area;
Measures to prevent spread of contamination.
The most likely accident in a nuclear medicine facility is contamination
of workers, patients, equipment and facilities. It can range from small to very
large spillages of radioactivity, for example, serious damage to the technetium
generator or spillage of several gigabecquerels of 131I. The procedures for cleaning
up a small amount of contamination should be known and practised by every
technologist in the facility. The cleaning procedures in cases of more severe
contamination should always be supervised by the RPO. Local rules should be
quality control mechanisms and procedures for reviewing and assessing the
overall effectiveness of protection and safety measures.
It is a common and growing practice that hospitals or clinics implement a
quality management system for all of the medical care received in diagnosis and
treatment, i.e. covering the overall nuclear medicine practice. The QA programme
envisaged by the BSS should be part of the wider facility quality management
system. In the hospital or clinic, it is common to include QA as part of the RPP
or, conversely, to include the RPP as part of a more general QA programme for
the hospital or clinic. Regardless of its organization, it is important that radiation
protection is an integral part of a system of quality management. The remainder
of this section considers aspects of QA applied to a nuclear medicine facility that
are covered in the BSS. Specific details with respect to medical exposure are
covered in Section 3.7.2.9.
An effective QA programme requires a strong commitment from the
nuclear medicine facility's management to provide the necessary resources of
time, personnel and budget. It is recommended that the nuclear medicine facility
establish a group that actively works with QA issues. Such a QA committee
should have a representative from management, a nuclear medicine physician, a
medical physicist, a nuclear medicine technologist and an engineer as members.
The QA committee should meet regularly and review the different components of
the programme.
The QA programme should cover the entire process from the initial decision
to adopt a particular procedure through to the interpretation and recording
of results, and should include ongoing auditing, both internal and external, as
a systematic control methodology. The maintenance of records is an important
part of QA. One important aspect of any QA programme is continuous quality
improvement. This implies a commitment of the staff to strive for continuous
improvement in the use of unsealed sources in diagnosis and therapy, based
on new information learned from their QA programme and new techniques
developed by the nuclear medicine community at large. Feedback from
operational experience and lessons learned from accidents or near misses can
help identify potential problems and correct deficiencies, and should, therefore,
be used systematically, as part of the continuous quality improvement.
A QA programme should cover all aspects of diagnosis and therapy,
including:
The prescription of the procedure by the medical practitioner and its
documentation (checking for any error or contraindication);
Appointments and patient information;
Clinical dosimetry;
Optimization of examination protocol;
REFERENCES
[3.1] INTERNATIONAL COMMISSION ON RADIOLOGICAL PROTECTION, Recommendations of the ICRP, Publication 103, Elsevier (2008).
BIBLIOGRAPHY
EUROPEAN COMMISSION, European Guidelines on Quality Criteria for Computed
Tomography, Rep. EUR 16262 EN, Office for Official Publications of the European
Communities, Brussels (1999).
INTERNATIONAL ATOMIC ENERGY AGENCY (IAEA, Vienna)
Quality Control of Nuclear Medicine Instruments 1991, IAEA-TECDOC-602 (1991).
Radiological Protection for Medical Exposure to Ionizing Radiation, IAEA Safety Standards
Series No. RS-G-1.5 (2002).
IAEA Quality Control Atlas for Scintillation Camera Systems (2003).
Quality Assurance for Radioactivity Measurement in Nuclear Medicine, Technical Reports
Series No. 454 (2006).
Radiation Protection in Newer Medical Imaging Techniques: PET/CT, Safety Reports Series
No. 58 (2008).
Quality Assurance for PET and PET/CT Systems, IAEA Human Health Series No. 1 (2009).
Quality Assurance for SPECT Systems, IAEA Human Health Series No. 6 (2009).
Quality Management Audits in Nuclear Medicine Practices (2009).
Radiation Protection of Patients (RPoP),
https://rpop.iaea.org/RPOP/RPOP/Content/index.htm
INTERNATIONAL COMMISSION ON RADIOLOGICAL PROTECTION
Radiological Protection of the Worker in Medicine and Dentistry, Publication 57, Pergamon
Press, Oxford and New York (1989).
Radiological Protection in Biomedical Research, Publication 62, Pergamon Press, Oxford and
New York (1991).
Radiological Protection in Medicine, Publication 105, Elsevier (2008).
Pregnancy and Medical Radiation, Publication 84, Pergamon Press, Oxford and New York
(2000).
INTERNATIONAL COMMISSION ON RADIATION UNITS AND MEASUREMENTS,
Quantities and Units in Radiation Protection Dosimetry, ICRU Rep. 51, Bethesda MD (1993).
MADSEN, M.T., et al., AAPM Task Group 108: PET and PET/CT shielding requirements,
Med. Phys. 33 (2006) 1.
SMITH, A.H., HART, G.C. (Eds), INSTITUTE OF PHYSICAL SCIENCES IN MEDICINE,
Quality Standards in Nuclear Medicine, IPSM Rep. No. 65, York (1992).
CHAPTER 4
RADIONUCLIDE PRODUCTION
H.O. LUNDQVIST
Department of Radiology, Oncology and Radiation Science,
Uppsala University,
Uppsala, Sweden
4.1. THE ORIGINS OF DIFFERENT NUCLEI
All matter in the universe has its origin in an event called the big bang,
a cosmic explosion releasing an enormous amount of energy about 14 billion
years ago. Scientists believe that particles such as protons and neutrons, which
form the building blocks of nuclei, were condensed as free particles during the
first seconds. With the decreasing temperature of the expanding universe, the
formation of particle combinations such as deuterium (heavy hydrogen) and
helium occurred. For several hundred thousand years, the universe was a plasma
composed of hydrogen, deuterium, helium ions and free electrons. As the
temperature continued to decrease, the electrons were able to attach to ions,
forming neutral atoms and converting the plasma into a large cloud of hydrogen
and helium gas. Locally, this neutral gas slowly condensed under the force of
gravity to form the first stars. As the temperature and the density in the stars
increased, the probability of nuclear fusion resulting in the production of heavier
elements increased, culminating in all of the elements in the periodic table that
we know today. As the stars aged, consuming their hydrogen fuel, they eventually
exploded, spreading their contents of heavy materials around the universe.
Owing to gravity, other stars formed with planets around them, composed of
these heavy elements. Four and a half billion years have passed since the planet
Earth was formed. In that time, most of the atomic nuclei consisting of unstable
protonneutron combinations have undergone transformation (radioactive
decay) to more stable (non-radioactive) combinations. However, some with very
long half-lives remain: 40K, 204Pb, 232Th and the naturally occurring isotopes of
uranium.
The discovery of these radioactive atoms was first made by Henri Becquerel
in 1896. The chemical purification and elucidation of some of the properties
of radioactive substances was further investigated by Marie Skłodowska-Curie
and her husband Pierre Curie. Since some of these long lived radionuclides
generated more short lived radionuclides, a new scientific tool had been
discovered that was later found to have profound implications in what today is
known as nuclear medicine. George de Hevesy was a pioneer in demonstrating
the practical uses of the new radioactive elements. He and his colleagues used a
radioactive isotope of lead, 210Pb, as a tracer (or indicator) when they studied the
solubility of sparingly soluble lead salts. De Hevesy was also the first to apply
the radioactive tracer technique in biology when he investigated lead uptake in
plants (1923) using 212Pb. Only one year later, Blumengarten and Weiss carried
out the first clinical study, when they injected 212Bi into one arm of a patient and
measured the arrival time in the other arm. From this study, they concluded that
the arrival time was prolonged in patients with heart disease.
4.1.1. Induced radioactivity
In the beginning, nature was the supplier of the radioactive nuclides
used. Isotopes of uranium and thorium generated a variety of radioactive
heavy elements such as lead, but radioactive isotopes of light elements were
not known. Marie Curie's daughter Irène, together with her husband Frédéric
Joliot, took the next step. Alpha emitting sources had long been used to bombard
different elements, for example, by Ernest Rutherford, who studied the deflection
of α particles in gold foils. The large deflections observed in this work led to
the conclusion that the atom consisted of a tiny nucleus of protons with orbiting
electrons (similar to planets around the sun). However, the Joliot-Curies also showed
that the α particles induced radioactivity in the bombarded foil (in their case,
aluminium foil). The induced radioactivity had a half-life of about 3 min. They
identified the emitted radiation to be from 30P created in the nuclear reaction
27Al(α, n)30P.
They also concluded that:
These elements and similar ones may possibly be formed in different
nuclear reactions with other bombarding particles: protons, deuterons
and neutrons. For example, 13N could perhaps be formed by capture of a
deuteron in 12C, followed by the emission of a neutron.
The same year, this was proved to be true by Ernest Lawrence in Berkeley,
California and Enrico Fermi in Rome. Lawrence had built a cyclotron capable
of accelerating deuterons up to about 3 MeV. He soon reported the production
of 13N with a half-life of 10 min. Thereafter, the cyclotron was used to produce
several other biologically important radionuclides such as 11C, 32P and 22Na.
Fermi realized that the neutron was advantageous for radionuclide production.
Since it has no charge, it could easily enter into the nucleus and induce a nuclear
reaction. He immediately made a strong neutron source by sealing up 222Rn gas
with beryllium powder in a glass vial. The α particle emitted from 222Rn caused a
nuclear reaction in beryllium and a neutron was emitted, 9Be(α, n)12C.
Fermi and his research group started a systematic search by irradiating all
available elements in the periodic system with fast and slow neutrons to study
the creation of induced radioactivity. From hydrogen to oxygen, no radioactivity
was observed in their targets, but in the ninth element, fluorine, their hopes were
fulfilled. In the following weeks, they bombarded some 60 elements and found
induced radioactivity in 40 of them. They also observed that the lighter elements
were usually transmuted into radionuclides of a different chemical element,
whereas heavier elements appeared to yield radioisotopes of the same element
as the target.
These new discoveries excited the scientific community. From having been
a rather limited technique, the radioactivity tracer principle could suddenly be
applied in a variety of fields, especially in life sciences. De Hevesy immediately
started to study the uptake and elimination of 32P phosphate in various tissues of
rats and demonstrated, for the first time, the kinetics of vital elements in living
creatures. Iodine-128 was soon afterwards applied in the diagnosis of thyroid disease.
This was the start of the radiotracer technology in biology and medicine as
we know it today.
One early cyclotron-produced nuclide of special importance was 11C,
since carbon is fundamental in life sciences. Carbon-11 had a half-life of only
20 min, but by setting up a chemical laboratory close to the cyclotron, organic
compounds labelled with 11C were obtained in large amounts. Photosynthesis
was studied using 11CO2 and the fixation of carbon monoxide in humans by
inhaling 11CO. However, 20 min was a short half-life and the use of 11C was
limited to the most rapid biochemical reactions. It must be remembered that the
radio-detectors used at that time were primitive and that the chemical, synthetic
and analytical tools were not adapted to such short times. A search to find a more
long lived isotope of carbon resulted in the discovery in 1939 of 14C produced in
the nuclear reaction 13C(d, p)14C.
Unfortunately, 14C produced this way was of limited use since the
radionuclide could not be separated from the target. However, during the
bombardments, a bottle of ammonium nitrate solution had been standing close
to the target. By pure chance, it was discovered that this bottle also contained
14C, which had been produced in the reaction 14N(n, p)14C.
The deuterons used in the bombardment consist of one proton and one
neutron with a binding energy of about 2 MeV. When high energy deuterons
hit a target, it is likely that the binding between the particles breaks and that a
free neutron is created in what is called a stripping reaction. The bottle with
ammonium nitrate had, thus, unintentionally been neutron irradiated. Since no
carbon was present in the bottle (except small amounts from dissolved airborne
carbon dioxide), the 14C produced this way was of high specific radioactivity.
It was also very easy to separate from the target. In the nuclear reaction, a
hot carbon atom was created, which formed 14CO2 in the solution. By simply
bubbling air through the bottle, the 14C was released from the target.
The same year, tritium was discovered by deuteron irradiation of water.
One of the pioneers, Martin Kamen, stated:
Within a few months, after the scientific world had somewhat ruefully
concluded that development of tracer techniques would be seriously
handicapped because useful radioactive tracers for carbon, hydrogen,
oxygen and nitrogen did not exist, 14C and 3H were discovered.
Before the second world war, the cyclotron was the main producer of
radionuclides since the neutron sources at that time were very weak. However,
with the development of the nuclear reactor, that situation changed. Suddenly,
a strong neutron source was available, which could easily produce almost
unlimited amounts of radioactive nuclides including biologically important
elements, such as 3H, 14C, 32P and 35S, and clinically interesting radionuclides,
such as 60Co (for external radiotherapy) and 131I, for nuclear medicine. After
the war, a new industry was born which could deliver a variety of radiolabelled
compounds for research and clinical use at a reasonable price.
However, accelerator produced nuclides have a special character, which
makes them differ from reactor produced nuclides. Today, their popularity is
increasing again. Generally, reactor produced radionuclides are most suitable for
laboratory work, whereas accelerator produced radionuclides are more useful
clinically. Some of the most used radionuclides in nuclear medicine, such as
111In, 123I and 201Tl, and the short lived radionuclides, 11C, 13N, 15O and 18F, used
for positron emission tomography (PET), are all cyclotron produced.
4.1.2. Nuclide chart and line of nuclear stability
During the late 19th century, chemists learned to organize chemical
knowledge into the periodic system. Radioactivity, when it was discovered,
conflicted with that system. Suddenly, various samples, apparently with the
same chemical behaviour, were found to have different physical qualities such
as half-life, emitted type of radiation and energy. The concept of isotopes, or
elements occupying the same place in the periodic system (from the Greek
isos topos, meaning 'same place'), was introduced by Soddy in 1913, but a
complete explanation had to await the discovery of the neutron by
Chadwick in 1932.
FIG. 4.1. Chart of nuclides. The black dots represent 279 naturally existing combinations of
protons and neutrons (stable or almost stable nuclides). There are about 2300 proton/neutron
combinations that are unstable around this stable line.
Figure 4.2 shows a limited part of the nuclide chart. The formal notation
for an isotope writes the mass number A as a superscript and the atomic number Z
as a subscript in front of the element symbol X (e.g. C for carbon), where A = Z + N,
Z is the number of protons in the nucleus (atomic number) and N the number of
neutrons in the nucleus.
This notation is overdetermined: if the element symbol X is known, so is the
number of protons in the nucleus, Z. Therefore, the simplified notation AX
(e.g. 14C) is commonly used.
Some relations of the numbers of protons and neutrons have special names
such as:
Isotopes: the number of protons is constant (Z = constant).
Isotones: the number of neutrons is constant (N = constant).
Isobars: the mass number is constant (A = constant).
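The bookkeeping implied by the simplified notation can be sketched in a few lines of Python; the symbol parser and the small Z lookup table are illustrative only:

```python
import re

def parse_nuclide(symbol):
    """Split a simplified nuclide symbol such as '99mTc' into
    (mass number A, element symbol, metastable flag)."""
    m = re.fullmatch(r"(\d+)(m?)([A-Z][a-z]?)", symbol)
    if not m:
        raise ValueError(f"not a nuclide symbol: {symbol!r}")
    return int(m.group(1)), m.group(3), m.group(2) == "m"

# Z is looked up from the element symbol; N then follows from N = A - Z.
Z = {"C": 6, "Tc": 43, "I": 53}
a, elem, meta = parse_nuclide("14C")
print(a, elem, "N =", a - Z[elem])   # 14 C N = 8
```

For 99mTc the parser also reports the metastable flag, so A = 99, Z = 43 and N = 56 follow directly.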
FIG. 4.2. A part of the nuclide chart where the lightest elements are shown. The darkened
fields represent stable nuclei. Nuclides to the left of the stable ones are radionuclides deficient
in neutrons and those to the right, rich in neutrons.
In the nuclide chart (Fig. 4.1), the stable nuclides fall along a monotonically
increasing line called the stability line. The stability of the nucleus is determined
by competing forces: the strong force that binds the nucleons (protons and
neutrons) together and the Coulomb force that repulses particles of like charge,
e.g. protons. The interplay between the forces is illustrated in Fig. 4.3.
For best stability, the nucleus has an equal number of protons and
neutrons. This is a quantum mechanical feature of bound particles and in Fig. 4.1
this is illustrated by a straight line. It is also seen that the stability line follows
the straight line for the light elements but that there is considerable deviation
(neutron excess) for the heavier elements. The explanation is the large Coulomb
force in the heavy elements which have many protons in close proximity. By
diluting the charge by non-charged neutrons, the distance between the charges
increases and the Coulomb force decreases.
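The neutron excess can be made quantitative with a textbook result from the semi-empirical mass formula: minimizing the isobar mass with respect to Z gives approximately Z ≈ A/(1.98 + 0.0155 A^(2/3)). A hedged sketch (the coefficients vary slightly between parametrizations):

```python
def z_most_stable(a):
    """Approximate proton number of the most stable isobar for mass
    number a, from minimizing the semi-empirical mass formula:
    Z ~ A / (1.98 + 0.0155 * A**(2/3))."""
    return a / (1.98 + 0.0155 * a ** (2.0 / 3.0))

# Light nuclides give Z close to A/2; heavy ones show the neutron excess.
# For example, A = 14 gives Z ~ 7 (14N) and A = 208 gives Z ~ 82 (208Pb).
for a in (14, 40, 99, 208):
    print(a, round(z_most_stable(a)))
```

The formula reproduces the bend of the stability line away from Z = N as the Coulomb term grows with Z.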
FIG. 4.3. Between a proton and a neutron, there is a nuclear force that amounts to a binding
energy of 2.225 MeV. The nucleons form a stable combination called deuterium, an isotope of
hydrogen. In a system of two protons, the nuclear force is as strong as in the neutron-proton
pair, but the repulsive Coulomb force is stronger. Thus, this system cannot exist. The nuclear
force between two neutrons is equally strong and there is no Coulomb force. Nevertheless, this
system cannot exist either, due to other repulsive forces, a consequence of the rules of pairing quarks.
to produce radionuclides, mainly those that are based upon thermal neutrons, have
positive Q-values, but reactions based on positively charged particles usually have
negative Q-values, i.e. extra energy needs to be added to get the reaction going.
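A Q-value follows directly from tabulated atomic mass excesses: the sum over the ingoing particles minus the sum over the outgoing ones. A short sketch in Python, with mass excesses rounded from standard mass tables:

```python
# Atomic mass excesses in MeV (rounded values from mass tables):
DELTA = {
    "n": 8.071, "1H": 7.289,
    "14N": 2.863, "14C": 3.020,
    "59Co": -62.228, "60Co": -61.646,
}

def q_value(reactants, products):
    """Q = (sum of mass excesses in) - (sum of mass excesses out), in MeV."""
    return sum(DELTA[x] for x in reactants) - sum(DELTA[x] for x in products)

# 14N(n, p)14C is slightly exothermic (Q is about +0.63 MeV), so it runs
# even with thermal neutrons:
print(round(q_value(["14N", "n"], ["14C", "1H"]), 3))

# 59Co(n, gamma)60Co releases the neutron binding energy (about 7.5 MeV):
print(round(q_value(["59Co", "n"], ["60Co"]), 3))
```

A negative result, as for most charged particle reactions, is the extra kinetic energy that the projectile must supply.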
4.1.4. Types of nuclear reaction, reaction channels and cross-section
As seen in Fig.4.1, the radionuclides to the right of the stability line have
an excess of neutrons compared to the stable elements and they are preferentially
produced by irradiating a stable nuclide with neutrons. The radionuclides to
the left are neutron deficient or have an excess of charge and, hence, they are
mainly produced by irradiating stable elements by a charged particle, e.g. p or d.
Although these are the main principles, there are exceptions.
Usually, the irradiating particles have a large kinetic energy that is
transferred to the target nucleus to enable a nuclear reaction (the exception
being thermal neutrons that can start a reaction by thermal diffusion). Figure 4.4
shows schematically an incoming beam incident upon the target, where it may
be scattered and absorbed. It can transfer its energy totally or partly to the target
nucleus and can interact with parts of or the whole of the target nucleus. Since the
produced activity should be high, the target is also usually thick.
FIG. 4.4. Target irradiation. A nuclear physicist is usually interested in the particles coming
out, their energy and angular distribution, but the radiochemist is mainly interested in the
transformed nuclides in the target.
FIG. 4.5. General cross-sectional behaviour for nuclear reactions as a function of the incident
particle energy. Since the proton has to overcome the Coulomb barrier, there is a threshold that
is not present for the neutron. Even very low energy neutrons can penetrate into the nucleus to
cause a nuclear reaction.
FIG. 4.6. A schematic figure showing some reaction channels upon proton irradiation.
FIG. 4.7. A schematic view of particle energy variations of a cross-section for direct nuclear
reactions and for forming a compound nucleus.
Nuclear reaction          Half-life T1/2    Cross-section (mb)
Thermal neutrons:
59Co(n, γ)60Co            5.3 a             2000
14N(n, p)14C              5730 a            1.75
33S(n, p)33P              25 d              0.015
35Cl(n, α)32P             14 d              0.05
Fast neutrons:
35Cl(n, α)32P             14 d              6.1
24Mg(n, p)24Na            15 h              1.2
Nuclear reactions with thermal neutrons are attractive for many reasons.
The yields are high due to large cross-sections and the high thermal neutron fluxes
available in the reactor. In some cases, the yields are sufficiently high to use these
reactions as the source of charged secondary particles, e.g. 6Li(n, α)3H for the
production of high energy 3H ions, which can then be used for the production of
18F by 16O(3H, n)18F. The target used is 6LiOH, in which the produced 3H ions
will be in close contact with the target 16O. A drawback of this production is that
when the target is dissolved, the solution is heavily contaminated with 3H water
that might be difficult to remove. Today, with an increasing number of hospital
based accelerators, there is little need for neutron produced 18F.
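The yield argument can be made concrete with the standard activation equation A = Nσφ(1 − e^(−λt)). A small sketch; the flux, cross-section and irradiation time below are assumed round numbers for illustration, not values from the text:

```python
import math

N_A = 6.022e23   # Avogadro's number

def activation_activity(mass_g, molar_mass, abundance, sigma_b, flux,
                        t_irr_s, half_life_s):
    """Activity (Bq) at end of a neutron irradiation:
    A = N * sigma * phi * (1 - exp(-lambda * t_irr))."""
    n_target = mass_g / molar_mass * N_A * abundance
    lam = math.log(2) / half_life_s
    return n_target * sigma_b * 1e-24 * flux * (1.0 - math.exp(-lam * t_irr_s))

# Illustrative, assumed numbers: 1 g natural cobalt (100% 59Co),
# sigma = 37 b for 59Co(n, gamma)60Co, phi = 1e14 n/(cm2*s), 1 week irradiation.
t_half_60co = 5.27 * 365.25 * 86400
a = activation_activity(1.0, 58.93, 1.0, 37.0, 1e14, 7 * 86400, t_half_60co)
print(f"{a / 1e9:.1f} GBq of 60Co")
```

With these assumed conditions the sketch gives roughly 9.5 × 10^10 Bq (about 95 GBq), far from saturation because one week is short compared with the 5.27 a half-life.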
Another reactor produced neutron deficient radionuclide is 125I:
excited levels), instead of the more common prompt (about 10⁻¹⁴ s) emission. Such
radioisotopes are suitable for nuclear medicine imaging, since they principally
yield γ radiation, with only some electron emission, a consequence of internal
conversion. The most commonly used radionuclide in nuclear medicine, 99mTc,
is of this type. The m after the atomic mass signifies that this is the metastable
version of the radionuclide.
In radionuclide therapy, in contrast to diagnostic applications, the emission
of high energy radiation is desirable. Most radionuclides for radiotherapy are,
therefore, reactor produced. Examples include 90Y, 131I and 177Lu. A case of
interest to study is 177Lu, which can be produced in two different ways using
thermal neutrons. The most common production route is still the (n, γ) reaction
on 176Lu, which opposes two conventional wisdoms in practical radionuclide
production for biomolecular labelling:
(a) Not to use a production that yields the same product element as the target
since it will negatively affect the labelling ability due to the low specific
radioactivity;
(b) Not to use a target that is radioactive.
However, 176Lu is a natural radioactive isotope of lutetium with an
abundance of 2.59%. Figure 4.8 shows how 177Lu needs to be separated from
the dominant 175Lu to decrease the mass of the final product. This method of
production works because the high cross-section (2020 b) of 176Lu results in a
high fraction of the target atoms being converted to 177Lu, yielding an acceptable
specific radioactivity of the final product.
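The burn-up argument can be checked with a short calculation: the conversion rate per 176Lu atom is σφ, so the converted fraction after an irradiation time t is 1 − e^(−σφt), and the 177Lu content follows the usual parent-daughter balance. Flux and irradiation time below are assumed for illustration; only σ = 2020 b is taken from the text:

```python
import math

# Assumed irradiation conditions (not from the text): thermal flux
# phi = 1e14 n/(cm2*s) and a 7 d irradiation; sigma(176Lu) = 2020 b.
sigma_cm2 = 2020e-24
phi = 1e14
t = 7 * 86400.0

lam_177 = math.log(2) / (6.734 * 86400.0)   # 177Lu decay constant (1/s)
burn = sigma_cm2 * phi                       # conversion rate per 176Lu atom

# Fraction of 176Lu atoms converted, and the 177Lu/176Lu atom ratio at the
# end of bombardment (production vs. decay balance for the daughter):
converted = 1.0 - math.exp(-burn * t)
ratio_177 = burn / (lam_177 - burn) * (math.exp(-burn * t) - math.exp(-lam_177 * t))
print(f"{converted:.1%} of the 176Lu converted; 177Lu/176Lu = {ratio_177:.3f}")
```

With these numbers, roughly a tenth of the 176Lu target atoms are consumed in a week, which is why the product reaches an acceptable specific radioactivity despite being the same element as the target.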
On the right of Fig.4.8, an indirect way to produce 177Lu from 176Yb is also
shown. This method of production utilizes a generator nuclide, 177Yb, produced
by an (n, γ) reaction, which then decays to 177Lu. In principle, by chemically
separating lutetium from ytterbium, one would obtain the highest possible specific
radioactivity. However, the chemical separation between two lanthanides is not
trivial and, thus, it is difficult to obtain 177Lu without substantial contamination of
the target material Yb that may compete in the labelling procedure. Furthermore,
the cross-section for this reaction is almost a thousandfold lower, resulting in a
much lower product yield.
Reactions involving fast neutrons usually have cross-sections that are of the
order of millibarns, which, coupled with the much lower neutron flux at higher
energy relative to thermal neutron fluxes, leads to lower yields. However, there
are some important radionuclides, e.g. 32P, that have to be produced this way.
Figure 4.9 gives the details of this production.
FIG. 4.8. Production of 177Lu from 176Lu (left) and from 176Yb (right).
FIG. 4.9. Data for the production of 32P in the nuclear reaction 32S(n, p)32P. The reaction
threshold is 0.51 MeV. From the cross-section data, it can be seen that there is no substantial
yield until an energy of about 2 MeV. The yield is an integration of the cross-section data and
the neutron energy spectrum. A practical cross-section can be calculated to about 60 mb.
are marked in Fig. 4.10. Some medically important radionuclides are produced
by fission, such as 90Y (therapy) and 99mTc (diagnostic). They are not produced
directly but by a generator system:
90Sr → 90Y

99Mo → 99mTc
CHAPTER 4
FIG. 4.10. The yield of fission fragments as a function of mass. The peaks around mass
numbers 99 and 134 are marked.
The primary radionuclides produced are then 90Sr and 99Mo, or more
precisely the mass numbers 90 and 99.
Another important fission produced radionuclide in nuclear medicine,
both for diagnostics and therapy, is 131I. The practical fission cross-section for
this production is the fission cross-section of 235U multiplied by the fraction
of fragments having a mass of 131, or 586 b × 0.029 = 17 b. The probability of
producing a mass of 131 is 2.9% per fission. Iodine-131 is the only radionuclide
with a mass of 131 that has a half-life of more than 1 h, meaning that all of the
others will soon have decayed to 131I.
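The practical cross-section arithmetic above can be sketched in a couple of lines (the values are taken from the text):

```python
# Practical production cross-section for 131I from thermal fission of 235U.
sigma_fission_b = 586    # thermal fission cross-section of 235U (b)
chain_yield_131 = 0.029  # probability of a fission fragment of mass 131 (2.9%)

sigma_practical_b = sigma_fission_b * chain_yield_131
print(f"Practical cross-section: {sigma_practical_b:.0f} b")  # 17 b
```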
4.3. ACCELERATOR PRODUCTION
Charged particles, unlike neutrons, are unable to diffuse into the nucleus,
but need to have sufficient kinetic energy to overcome the Coulomb barrier.
However, charged particles are readily accelerated to kinetic energies that open
up more reaction channels than fast neutrons in a reactor. An example is seen in
Fig. 4.11, which also illustrates the alternative opportunities offered by p, d, 3He
and 4He (α) projectiles to produce practical and economic nuclear reactions.
[Reactions shown in the figure include the direct routes 123Te(p, n)123I and
122Te(d, n)123I, and routes via 123Xe from 127I and 124Xe targets.]
FIG.4.11. Various nuclear reactions that produce 123I. All of the reactions have been tried
and can be performed at relatively low particle energies. The 123Xe produced in the first two
reactions decays to 123I with a half-life of about 2 h. In the first reaction, the 123Xe is separated
from the target and then decays, while in the second reaction, the 123I is washed out of the
target after decay.
TABLE 4.2. CHARACTERIZATION OF ACCELERATORS FOR RADIONUCLIDE PRODUCTION

Proton energy (MeV)   Accelerated particles   Used for
<10                                           PET
10–20                 Usually p and d         PET
30–40
40–500                Usually p only
FIG. 4.12. The cyclotron principle. A negative ion is injected into the gap between D-shaped
magnets (Dees) (1). An alternating electric field is applied across the gap, which causes the
charge to accelerate. The magnetic force on a moving charge forces it to bend into a
semicircular path of ever increasing radius (2). The applied electric field is reversed in
direction each time the charged particle reaches the gap, so that it is continuously
accelerated, until finally being ejected (3).
the electrodes called the acceleration gap. If a voltage is applied between the
electrodes, the ions will experience the potential gradient when traversing the
gap between the electrodes. If the voltage polarity is switched at the correct rate,
the ions will be continuously accelerated when crossing the gap, thus resulting
in an increase in the ions' energy and velocity. As their velocity increases, the
ions will move into a circular orbit of increasing radius. The time taken for the
ions to return to the gap is independent of their radius in accelerators of <30 MeV.
For the cyclotron to operate correctly, it is necessary for the frequency of the
electric field across the Dees to be the same as the frequency of the circulating
ions, so that the polarity changes upon each traversal of the ions across the Dees.
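The resonance condition can be made concrete: for a non-relativistic ion, the orbital period is independent of radius and the frequency is f = qB/(2πm). A minimal sketch for protons (the 1.5 T field strength is an illustrative assumption, not a value from the text):

```python
import math

ELEMENTARY_CHARGE = 1.602e-19  # C
PROTON_MASS = 1.673e-27        # kg

def cyclotron_frequency(b_field_t: float) -> float:
    """Orbital frequency of a proton, f = qB/(2*pi*m); independent of orbit radius."""
    return ELEMENTARY_CHARGE * b_field_t / (2 * math.pi * PROTON_MASS)

f = cyclotron_frequency(1.5)
print(f"RF frequency needed at B = 1.5 T: {f / 1e6:.1f} MHz")
```

The Dee voltage polarity must switch at this frequency so that the ion gains energy on every gap crossing.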
In commercial accelerators, with high beam currents of several
milliamperes, it is usual to have an internal target for radionuclide production
located inside the chamber. In accelerators with lower beam currents (<100 µA),
such as those dedicated for PET hospital facilities, it is more common to extract
the beam onto an external target system. The modes of extraction depend upon
whether positive or negative ions are accelerated. Extraction of positive ions is
made by using a deflector that applies a static electric field which acts upon the
particles when in the outer orbits. Some beam current is invariably lost in the
process and the deflector often becomes quite radioactive.
Modern proton/deuterium accelerators usually accelerate negative ions
that are more easily extracted. In these systems, a thin carbon foil is used
that will strip away the two orbit electrons. As a consequence, the particles
suddenly change from negative to positive charge and are effectively bent out
of the magnetic field with an almost 100% extraction efficiency and with little
activation.
The extracted beam can either be transported further in a beam optical
transport system or will hit a production target directly. The target is usually
separated from the vacuum by metallic foils that are strong enough to withstand
the pressure difference and the heat from the beam energy, as it is transferred
and absorbed by the foils. The reason why two foils are used is that the heat
produced by the beam passage has to be removed, which is facilitated by a flow
of helium gas between the foils. Helium is preferred as the cooling medium
since no induced activity will be produced in this gas.
4.3.2. Commercial production (low and high energy)
If the proton energy is >30 MeV, the particles tend to be relativistic,
i.e. their mass and their cycle time in orbit increase. A constant frequency of the
accelerating electric field would cause the ions to come out of phase. This can
be compensated for either by increasing the magnetic field as a function of the
cyclotron radius (isochronous cyclotrons) or by decreasing the radiofrequency
FIG.4.13. Schematic image of an internal target. The target material is usually thin (a few
tenths of a micrometre) and evaporated on a thicker backing plate. The target ensemble is
water cooled on the back. An advantage is that the beam is spread out over a large area which
facilitates cooling.
targets are to be preferred. Owing to the relatively low beam current, extraction is
not a problem. Since internal targets need to be taken in and out of the cyclotron
vacuum, they are not usually implemented in PET cyclotrons.
The choice of the right nuclear reaction and target material is crucial and
is illustrated by the production of 18F. There are several nuclear reactions that
can be applied (Table 4.3).
TABLE 4.3. DIFFERENT NUCLEAR REACTIONS FOR THE PRODUCTION OF 18F

Nuclear reaction   Comment

20Ne(d, α)18F      The nascent 18F will be highly reactive. In the noble gas Ne, it
                   will diffuse and stick to the target walls; difficult to extract.

21Ne(p, α)18F      Same as above; in addition, the abundance of 21Ne is low (0.27%)
                   and needs enrichment.

19F(p, d)18F       The product and target are the same element; poor specific
                   radioactivity.

16O(α, d)18F       Cheap target, but accelerators that can accelerate α particles to
                   35 MeV are expensive and not common.

18O(p, n)18F       The reaction of choice (see the discussion below).
Not only the nuclear reaction is important, but also the chemical
composition of the target. To irradiate 18O as a gas would be the purest target
(only target nuclide present) but handling a highly enriched gas in addition to
the hot-atom chemistry is complicated. Still, for some applications, this might
be the best choice. To irradiate 18O as an oxide and a solid target is possible
but the process following irradiation to dissolve the target and to chemically
separate 18F is complex, has a low yield and other elements in the oxide could
potentially contribute unwanted radioactivity. Enriched 18O water is a target of
choice as 18O is the dominant nucleus and hydrogen does not contribute to any
unwanted radioactivity. There is usually no need for target separation as water
containing 18F can often be directly used in the labelling chemistry. The target
water can also, after being diluted with saline, be injected directly into patients,
e.g. 18F-fluoride for PET bone scans. Water targets will produce 18F-fluoride for
use in stereospecific nucleophilic substitutions. An alternative production route is
neon gas production, 20Ne(d, α)18F. Adding 19F2 gas to the neon as a carrier yields
18F19F that can be used for electrophilic substitution. Adding carrier lowers the
specific radioactivity of the labelled product.
[Table: nuclear reactions and yields (GBq) for the common PET radionuclides
11C, 13N, 15O and 18F.]
75As(p, n)75Se
75As(p, 3n)73Se
75As(p, 4n)72Se
The maximum cross-sections are found at about 10, 30 and 40MeV for the
(p, n), (p, 3n) and (p, 4n) reactions, respectively. Thus, it takes about 10MeV to
expel a nucleon, i.e. a proton of 50MeV can cover radionuclide productions that
involve the emission of about five nucleons. At low energy, there is a disturbing
production of 75Se and if excessively high proton energy is used, another
liquid. In fact, most generators in nuclear medicine use ion exchange columns in
much the same way due to its simplicity of handling.
In generator systems, the daughter radionuclide is formed at the rate at
which the parent decays, λPNP. Once a state of transient equilibrium has been
reached, it also decays at the same rate, λDND. The equations that describe the
relationship between parent and daughter are provided in Chapter 1.
Another generator of increasing importance is 68Ge, which has a half-life
of 271 d and produces the short lived positron emitter 68Ga (T1/2 = 68 min). This is
produced as a 3+ ion that can be tagged, using a chelating agent such as DOTA,
to small peptides, e.g. 68Ga-DOTATOC. Owing to the long half-life of the parent,
the generator can be operated for up to two years and can be eluted every 5 h. One
problem with such a long lived generator is keeping it sterile, and furthermore,
the ion exchange material is exposed to high radiation doses that may reduce the
elution efficiency and the quality of the product.
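The growth of the daughter between elutions follows the parent–daughter equations referred to above (Chapter 1). A minimal sketch for the 68Ge/68Ga generator, assuming a nominal 1 GBq of 68Ge on the column (an illustrative value) and no 68Ga left immediately after elution:

```python
import math

def daughter_activity(a_parent_bq: float, t_half_parent_s: float,
                      t_half_daughter_s: float, t_s: float) -> float:
    """Daughter activity at time t after a complete elution (A_D(0) = 0)."""
    lam_p = math.log(2) / t_half_parent_s
    lam_d = math.log(2) / t_half_daughter_s
    return a_parent_bq * lam_d / (lam_d - lam_p) * (1 - math.exp(-(lam_d - lam_p) * t_s))

T_HALF_GE68 = 271 * 86400  # 271 d in seconds
T_HALF_GA68 = 68 * 60      # 68 min in seconds

for hours in (1, 3, 5):
    a = daughter_activity(1e9, T_HALF_GE68, T_HALF_GA68, hours * 3600)
    print(f"{hours} h after elution: {a / 1e9:.2f} GBq of 68Ga")
```

After about 5 h, i.e. roughly four daughter half-lives, the 68Ga activity has grown back to about 95% of the parent activity, which is consistent with eluting the generator every 5 h.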
The 90Sr/90Y generator is used to produce the therapeutic radionuclide 90Y.
This generator is not distributed to hospitals but is operated in special laboratories
on account of radiation protection considerations associated with the long lived
parent. The daughter, 90Y, has a half-life of 2.7 d which is adequate for transport
of the eluted 90Y to distant hospitals.
81Rb (4.5 h)/81mKr (13.5 s) for ventilation studies and 82Sr (25.5 d)/82Rb (75 s)
for cardiac PET studies are examples of other generators with special requirements
due to the extremely short half-life of the eluted product. Recently,
generator systems producing emitters for therapy have become available,
e.g. 225Ac (10 d)/213Bi (45.6min).
4.5. RADIOCHEMISTRY OF IRRADIATED TARGETS
During target irradiation, a few atoms of the wanted radionuclide are
produced within the bulk target material. The energy released in a nuclear
reaction is large relative to the electron binding energies and the radionuclide is,
therefore, usually born almost naked with no or few orbit electrons. This hot
atom will undergo chemical reactions depending on the target composition. In
a gas or liquid target, these hot atom reactions may even cause the activity to
be lost in covalent bonds to the target holder material. During irradiation, the
target is also heated and its structure and composition may change. A pressed
powder target may be sintered and become more ceramic, which makes it more
difficult to dissolve. The target may melt and the radioactivity may diffuse in the
target and even possibly evaporate. In designing a separation method, all of these
factors have to be considered. Fast, efficient and safe methods are required to
separate the few picograms of radioactive product from the bulk target material
which is present in gram quantities.
Separation of the radionuclide already starts in the target, as demonstrated
in the production of 11CO2. Carbon-11 is produced in a (p, α) reaction on
nitrogen gas. To enable the production of CO2, some trace amounts of oxygen
gas (0.1–0.5%) are added. However, at low beam currents, mainly CO will be
formed, since the target will not be heated. At high beam currents, the CO will
be oxidized to the chemical form CO2. The separation, made by passing the target
gas through a liquid nitrogen trap, is simple and efficient. By adding hydrogen gas
instead, the product will be CH4.
The skill in hot-atom chemistry is to obtain a suitable chemical form of
the radioactive product, especially when working with gas and liquid targets.
Solid targets are usually dissolved and chemically processed to obtain the wanted
chemical form for separation.
4.5.1. Carrier-free, carrier-added systems
The concept of specific activity a, i.e. the activity per mass of a preparation,
is essential in radiopharmacy. If 100% of the product contains radioactive atoms,
often called the theoretical a, then the relationship between the activity A in
becquerels and the number of radioactive atoms N is given by N = A/λ, where λ is
the decay constant (s−1). The decay constant can be calculated from the half-life
T1/2 in seconds as λ = ln(2)/T1/2.
The specific activity a expressed as activity per number of radioactive atoms
is then A/N = λ = ln(2)/T1/2. For a short lived radionuclide, a will be relatively
large compared to a long lived isotope. For example, a for 11C (T1/2 = 20 min) is
1.5 × 10^8 times larger than for 14C (T1/2 = 5730 a).
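The quoted factor follows directly from a = λ = ln(2)/T1/2: the ratio of two theoretical specific activities is just the inverse ratio of the half-lives:

```python
SECONDS_PER_YEAR = 365.25 * 86400

t_half_c11 = 20 * 60                  # 11C: 20 min, in seconds
t_half_c14 = 5730 * SECONDS_PER_YEAR  # 14C: 5730 a, in seconds

# a = ln(2)/T_half, so a(11C)/a(14C) = T_half(14C)/T_half(11C)
ratio = t_half_c14 / t_half_c11
print(f"a(11C)/a(14C) = {ratio:.1e}")  # 1.5e+08
```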
The specific activity a expressed in this way is a theoretical value that is
rarely obtained in practical work. When producing 11C, the target gas and target
holder will contain stable carbon that will dilute the radioactive carbon as well
as compete in the labelling process afterwards. A more empirical way to define
a is to divide the activity by the total mass of the element under consideration.
This value for 11C will usually be a few thousand times lower than the theoretical
value, while the production of 14C can come closer to the theoretical a.
In the labelling process, a is usually expressed as the activity per number
of molecules (a sum of labelled and unlabelled molecules). Instead of using the
number of atoms or molecules, it is common to use the mole concept by dividing
N by Avogadro's number (NA = 6.022 × 10^23 mol−1). A common unit for a is then
gigabecquerels per micromole.
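Combining the pieces, the theoretical specific activity in gigabecquerels per micromole is a = λNA scaled from becquerels per mole. A sketch; the 18F half-life of 109.8 min is a commonly quoted external value, not taken from the text:

```python
import math

AVOGADRO = 6.022e23  # atoms/mol

def theoretical_sa_gbq_per_umol(t_half_s: float) -> float:
    """Theoretical specific activity a = lambda*N_A, converted to GBq/umol."""
    lam = math.log(2) / t_half_s     # decay constant (1/s)
    a_bq_per_mol = lam * AVOGADRO    # Bq per mole of radioactive atoms
    return a_bq_per_mol / 1e6 / 1e9  # per micromole, in GBq

print(f"11C: {theoretical_sa_gbq_per_umol(20 * 60):.2e} GBq/umol")
print(f"18F: {theoretical_sa_gbq_per_umol(109.8 * 60):.2e} GBq/umol")
```

As noted in the text, measured specific activities are typically orders of magnitude below these theoretical ceilings because of the stable isotopes unavoidably introduced from the target and hardware.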
If the radioactive atoms are produced and separated from the target without
any stable isotopes, the process is said to be carrier-free. If stable isotopes are
pH and other separation conditions, and is then put on to a column containing the
ion exchange material. Optimal separation conditions are those under which the small
mass of desired radioactivity, but not the bulk target material, sticks to the column.
The column can then be small, and after washing and change of pH, the desired
activity can be eluted in a small volume. Under other conditions, large amounts
of ion exchange material have to be used to prevent saturation of binding sites
and leakage of the target material. This also means that large liquid volumes have
to be used, implying poorer separation. The two techniques are often performed
together by using liquid extraction to reduce the target mass, after which ion
exchange is used to make the final separation.
Occasionally, thermal separation techniques may be applied, which have
the advantage that they do not destroy the target (important when expensive
enriched targets are used) and that they lend themselves to automation. As an
example of such dry methods, the thermal separation of 76Br (T1/2=16 h) is
described. The target is Cu2Se enriched in 76Se, a selenium compound that can withstand some
heat. The nuclear reaction used is 76Se(p, n)76Br.
The process is as follows:
(a) The target is placed in a tube and heated, under a stream of argon gas, to
evaporate the 76Br activity by dry distillation (Fig.4.16);
(b) A temperature gradient is applied to separate the deposition areas of 76Br and
traces of co-evaporated selenide in the tube by thermal chromatography;
(c) The 76Br activity deposited on the tube wall is dissolved in small amounts
of buffer or water.
FIG.4.16. A schematic description of the 76Br separation equipment: (1) furnace, (2) auxiliary
furnace, (3) irradiated target, (4) deposition area of selenium, (5) deposition area of 76Br,
(6) gas trap.
These contradictory conditions are usually handled by having a box in the box,
i.e. the pharmaceutical is processed in a closed facility kept at overpressure, placed
inside the hot-box, which is kept at underpressure. The classical hot-box design, with
manipulators to process the radioactivity remotely, as seen in Fig. 4.17, is gradually
being replaced by lead protected chambers housing an automatic chemistry system or
a chemical robot, making the production of the pharmaceutical computer controlled.
FIG. 4.17. Examples of modern hot-box designs (courtesy of Von Gahlen Nederland B.V.).
CHAPTER 5
STATISTICS FOR RADIATION MEASUREMENT
M.G. Lötter
Department of Medical Physics,
University of the Free State,
Bloemfontein, South Africa
5.1. SOURCES OF ERROR IN NUCLEAR MEDICINE MEASUREMENT
Measurement errors are of three general types: (i) blunders, (ii) systematic
errors or accuracy of measurements, and (iii) random errors or precision of
measurements.
Blunders produce grossly inaccurate results and experienced observers
easily detect their occurrence. Examples in radiation counting or measurements
include the incorrect setting of the energy window, counting heavily contaminated
samples, using contaminated detectors for imaging or counting, obtaining
measurements of high activities, resulting in count rates that lead to excessive
dead time effects, and selecting the wrong patient orientation during imaging.
Although some blunders can be detected as outliers or by duplicate samples and
measurements, blunders should be avoided by careful, meticulous and dedicated
work. This is especially important where results will determine the diagnosis or
treatment of patients.
Systematic errors produce results that differ consistently from the correct
results by some fixed amount. The same result may be obtained in repeated
measurements, but overestimating or underestimating the true value. Systematic
errors are said to influence the accuracy of measurements. Measurement results
having systematic errors will be inaccurate or biased. Examples of a systematic
error are:
- When an incorrectly calibrated ionization chamber is used for measurement
of radiation dose.
- When, during thyroid uptake studies with 123I, the count rate of the reference
standard results in dead time losses: the percentage of thyroid uptake will
be overestimated.
- When, in sample counting, the geometry of samples and the position within
the detector are not the same as for the reference sample.
- When, during blood volume measurements, the tracer leaks out of the blood
compartment. The theory of the method assumes that the tracer will stay
in the blood compartment, so the leaking of the tracer will consistently
overestimate the measured blood volume.
- When, in the calculation of the ventricular ejection fraction during gated
blood pool studies, the selected background counts underestimate the true
ventricular background counts: the ejection fraction will be consistently
underestimated.
Measurement results affected by systematic errors are not always easy to
detect, since the measurements may not be too different from the expected results.
Systematic errors can be detected by using reference standards. For example,
radionuclide standards calibrated at a reference laboratory should be used to
calibrate source calibrators to determine correction factors for each radionuclide
used for patient treatment and diagnosis.
Measurement results affected by systematic errors can differ from the true
value by a constant value and/or by a fraction. Using gold standard reference
values, a regression curve can be calculated and used to convert measurements
affected by systematic error to more accurate values. For example, if the ejection
fraction is determined by a radionuclide gated study, it can be correlated with the
gold standard values.
Random errors are variations in results from one measurement to the next,
arising from actual random variation of the measured quantity itself, as well as
physical limitations of the measurement system.
Random error affects the reproducibility, precision or uncertainty in the
measurement. Random errors are always present when radiation measurements
are performed because the measured quantity, namely the radionuclide decay,
is a random varying quantity. The random error during radiation measurements
introduced by the measured quantity, that is the radionuclide decay, is
demonstrated in Fig.5.1. Figure5.1 shows the energy spectrum of a 57Co source
in a scattering medium and measured with a scintillation detector probe. The
energy spectrum represented by square markers is the measured energy spectrum
with random noise due to radionuclide decay. The solid line spectrum represents
the energy spectrum without random noise. The variation around the solid line of
the data points, represented by markers, is a result of random error introduced by
radionuclide decay.
The influence of the random error of the measurement system introduced
by the scintillation detector is also demonstrated in Fig.5.1. Cobalt-57 emits
photons of 122keV and with a perfect detection system all of the counts are
expected at 122keV. The measurements are, however, spread around 122keV
as a result of the random error introduced by the scintillation detector during the
will determine the ability of the system to reject lower energy scattered photons
and improve image contrast.
FIG. 5.2. The influence of random error as a result of radionuclide decay or counting statistics
is demonstrated for imaging. Technetium-99m posterior planar bone images (256 × 256 matrix)
using a scintillation camera were acquired to total counts of 21, 87 and 748 kcounts.
The experimental sample mean x̄_e of a set of N measurements x_1, x_2, ..., x_N is:

x̄_e = (x_1 + x_2 + ... + x_N)/N  (5.1)

x̄_e = (1/N) Σ_{i=1}^{N} x_i  (5.2)
The sample variance is:

σ_e² = [(x_1 − x̄_e)² + (x_2 − x̄_e)² + ... + (x_N − x̄_e)²]/(N − 1)
     = (1/(N − 1)) Σ_{i=1}^{N} (x_i − x̄_e)²  (5.3)

The sample standard deviation is the square root of the sample variance:

σ_e = √(σ_e²)  (5.4)

and the fractional standard deviation is:

σ_F = σ_e/x̄_e  (5.5)

The frequency distribution F(x) is the fraction of the N measurements that have
the value x:

F(x) = (number of occurrences of x)/N  (5.6)

The frequency distribution is normalized:

Σ_{x=0}^{∞} F(x) = 1  (5.7)
FIG. 5.3. One thousand measurements were made with a scintillation counter. (a) The graph
shows the variations observed for the first 50 measurements. (b) The graph (red bars) shows
the histogram of the relative frequency distribution for the measurements as well as the
expected calculated frequency distribution.
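The sample statistics defined in Eqs (5.1)–(5.5) can be computed for such a series of repeated counts in a few lines (the ten count values below are invented for illustration):

```python
import math

counts = [105, 98, 112, 95, 101, 99, 107, 93, 104, 100]  # hypothetical repeat counts

n = len(counts)
mean = sum(counts) / n                                     # Eq. (5.1)
variance = sum((x - mean) ** 2 for x in counts) / (n - 1)  # Eq. (5.3)
sd = math.sqrt(variance)                                   # Eq. (5.4)
fractional_sd = sd / mean                                  # Eq. (5.5)

print(f"mean = {mean:.1f}, sd = {sd:.2f}, fractional sd = {fractional_sd:.3f}")
```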
The sample mean can also be calculated from the number of
measurements in each bin of the frequency distribution function. The sum of the
measurements for each bin is obtained by multiplying the value of bin i by
the number of occurrences of the value x_i:
x̄_e = Σ_{i=1}^{N} x_i F(x_i)  (5.8)

σ_e² = (N/(N − 1)) Σ_{i=1}^{N} (x_i − x̄_e)² F(x_i)  (5.9)
The standard deviation and the fractional standard deviation are given by
Eqs(5.4) and (5.5).
The frequency distribution provides information and insight on the
precision of the experimental sample mean and of a single measurement.
Figure5.3 demonstrates the distribution of counting measurements around the
true mean x̄_t. The value of the true mean is not known, but the experimental
sample mean x̄_e can be used as an estimate of the true mean:

x̄_t ≈ x̄_e  (5.10)

Interval           Probability that a measurement falls within the interval (%)
x̄_t ± 0.675σ       50.0
x̄_t ± 1.000σ       68.3
x̄_t ± 1.640σ       90.0
x̄_t ± 2.000σ       95.0
x̄_t ± 3.000σ       99.7
two results are possible: the trial is either a success or not. It is further assumed
that the probability of success p is constant for all trials.
To show how these conditions apply in real situations, Table5.2 gives
four separate examples. The third example gives the basis for counting nuclear
radiation events. In this case, a trial consists of observing a given radioactive
nucleus for a period of time t. The number of trials n is equivalent to the number
of nuclei in the sample under observation, and the measurement consists of
counting those nuclei that undergo decay. We identify the probability of success
as p. For radioactive decay:

p = 1 − e^(−λt)  (5.11)
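For typical observation times this success probability per nucleus is very small, which is what later justifies the Poisson simplification. A quick sketch, using a 1 min observation of 99mTc with T1/2 = 6.0 h (an illustrative choice):

```python
import math

def decay_probability(t_half_s: float, t_s: float) -> float:
    """Probability that a given nucleus decays within time t (Eq. (5.11))."""
    lam = math.log(2) / t_half_s
    return 1 - math.exp(-lam * t_s)

p = decay_probability(6.0 * 3600, 60)
print(f"p = {p:.2e}")  # about 1.9e-03
```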
TABLE 5.2. EXAMPLES OF BINOMIAL PROCESSES

Trial                                          Definition of success    Probability of success p
Tossing a coin                                 Heads                    1/2
Rolling a die                                  A six                    1/6
Observing a radioactive nucleus for a time t   The nucleus decays       1 − e^(−λt)
A light photon striking a photocathode         An electron is ejected   1/5 (typical)
Figure 5.4 shows the probability distribution for the three models. The distributions
were generated by using a Microsoft Office Excel spreadsheet.
5.3.1.1. Binomial distribution
This is the most general model and is widely applicable to all constant
p processes (Fig.5.4). Binomial distribution is rarely used in nuclear decay
applications. One example in which the binomial distribution must be used is
when a radionuclide with a very short half-life is counted with a high counting
efficiency.
FIG. 5.4. Probability distribution models for successful event probability p = 0.4 and
p = 0.0001 for x̄ = 10 and x̄ = 100, respectively.
P(x) = [n!/((n − x)! x!)] p^x (1 − p)^(n−x)  (5.12)

Σ_{x=0}^{n} P(x) = 1  (5.13)

x̄ = Σ_{x=0}^{n} x P(x)  (5.14)
The sample variance for a set of experimental data has been defined by
Eq. (5.9). By analogy, the predicted variance σ² is given by:

σ² = Σ_{x=0}^{n} (x − x̄)² P(x)  (5.16)

For the binomial distribution, the predicted mean is x̄ = pn (Eq. (5.15)) and the
predicted standard deviation is:

σ = √(np(1 − p))  (5.19)

The fractional standard deviation is:

σ_F = √(np(1 − p))/np = √((1 − p)/np) = √((1 − p)/x̄)  (5.20)
Secondly, the light photons then eject x electrons from the photomultiplier
photocathode. Thirdly, these electrons are multiplied to form a pulse that
can be further processed. For each γ ray that interacts with the scintillator, the
number of light photons n, the number of electrons ejected x and the multiplication
vary statistically during the detection of the different γ rays. This variation
determines the energy resolution of the system.
In this example, the second stage is illustrated, that is the ejection of
electrons from the photocathode. The variation or the standard deviation and
fractional standard deviation for the number of electrons x that are ejected can
be calculated using the binomial distribution as is given by Eqs(5.19) and (5.20).
The typical values for a scintillation counter are as follows. It is assumed
that the 142 keV γ rays emitted by 99mTc are being counted. It is further assumed
that 100 eV is needed to generate a light photon in the scintillation crystal when a
γ ray interacts with the crystal. Therefore, if all of the energy of a single 142 keV
photon is absorbed, n = 142 000/100 = 1420 light photons will be emitted. It is
assumed that these light photons fall on the photocathode of the PMT to generate
x electrons for each γ ray absorbed. It is further assumed that five light photons
are required to eject one electron.
For the binomial distribution, the probability of a light photon ejecting
an electron is p = 1/5 and the number of trials n will be the number of light
photons generated for each γ ray, i.e. 1420. Equation (5.15) can be used
to calculate the predicted mean number of electrons ejected for each γ ray:
x̄ = pn = (1/5) × 1420 = 284 electrons

and, from Eq. (5.20):

σ_F = √((1 − p)/x̄) = √((1 − 1/5)/284) = 0.053  (5.21)
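The worked example can be checked numerically against Eqs (5.15), (5.19) and (5.20):

```python
import math

n = 1420   # light photons generated per absorbed 142 keV photon
p = 1 / 5  # probability that a light photon ejects a photoelectron

mean = p * n                               # Eq. (5.15)
sd = math.sqrt(n * p * (1 - p))            # Eq. (5.19)
fractional_sd = math.sqrt((1 - p) / mean)  # Eq. (5.20)

print(f"mean = {mean:.0f} electrons, sd = {sd:.1f}, fractional sd = {fractional_sd:.3f}")
```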
photopeak (Fig.5.1) and the energy resolution of the system (Sections 5.7.1 and
6.4).
5.3.3. Poisson distribution
Many binary processes can be characterized by a low probability of
success for each individual trial. This includes nuclear counting and imaging
applications in which large numbers of radionuclides make up the sample or
number of trials, but a relatively small fraction of these give rise to recorded
counts. Similarly, during imaging, many γ rays are emitted by the administered
imaging radionuclide for every one that interacts with the tissue. In addition,
during nuclear counting, many γ rays strike the detector for every single recorded
interaction.
Under these conditions, the approximation that the probability p is small
(p ≪ 1) will hold and some mathematical simplifications can be applied to the
binomial distribution. The binomial distribution reduces to the form:
P(x) = (pn)^x e^(−pn)/x!  (5.22)

The relation pn = x̄ holds for this distribution as well as for the binomial
distribution:

P(x) = x̄^x e^(−x̄)/x!  (5.23)

Σ_{x=0}^{∞} P(x) = 1  (5.24)
The mean value or first moment for the Poisson distribution is calculated
by inserting the Poisson distribution (Eq. (5.22)) into the equation to calculate the
mean for a frequency distribution (Eq. (5.8)):

x̄ = Σ_{x=0}^{∞} x P(x) = pn  (5.25)
This is the same result as was obtained for the binomial distribution.
The predicted variance of the Poisson distribution differs from that of the
binomial distribution and can be derived from Eqs(5.9) and (5.22):
σ² = Σ_{x=0}^{∞} (x − x̄)² P(x) = pn  (5.26)

The predicted standard deviation is the square root of the predicted variance
(Eq. (5.4)):

σ = √(σ²) = √x̄  (5.28)

σ_F = σ/x̄ = √x̄/x̄ = 1/√x̄  (5.29)
The fractional standard deviation is the inverse of the square root of the
mean value of the distribution.
Equations (5.28) and (5.29) are important equations and frequently find
application in nuclear detection and imaging.
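Equations (5.28) and (5.29) give the familiar counting rule that a recorded count x carries a relative uncertainty of 1/√x:

```python
import math

for counts in (100, 1_000, 10_000, 100_000):
    sd = math.sqrt(counts)       # Eq. (5.28): predicted standard deviation
    rel = 1 / math.sqrt(counts)  # Eq. (5.29): fractional standard deviation
    print(f"{counts:7d} counts: sd = {sd:7.1f}, relative sd = {100 * rel:.2f}%")
```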
When the mean value x̄ of the distribution is large, the Poisson distribution can be
approximated by a normal (Gaussian) distribution:

P(x) = (1/√(2πx̄)) e^(−(x − x̄)²/(2x̄))  (5.30)

Σ_{x=0}^{∞} P(x) = 1  (5.31)
The predicted standard deviation is the square root of the predicted variance
(Eq.(5.4)):
σ = √(σ²) = √x̄  (5.33)

σ_F = 1/√x̄  (5.34)
The fractional standard deviation is the inverse of the square root of the
mean value of the distribution.
For a continuous variable x, the normal distribution with mean x̄ and standard
deviation σ is:

P(x) = (1/(σ√(2π))) e^(−(1/2)((x − x̄)/σ)²)  (5.35)
FIG.5.6. Line source response curve obtained from a scintillation camera fitted to a normal
distribution model. Image resolution is measured as the distance of the full width at half
maximum (FWHM) of the percentage response. The standard deviation (SD) is the half width
at a percentage response of 60.65%.
All continuous normal distributions have the property that the interval within
one standard deviation on either side of the mean contains 68% of the total area
under the curve; within two standard deviations, 95%; and within three standard
deviations, 99.7%.
5.3.4.2. Continuous normal distribution: applications in medical physics
The normal distribution is often used in radionuclide measurements and
imaging to fit to experimental data. In this case, the equation is modified as
follows:
P(x) = 100 e^(−(1/2)((x − x̄)/σ)²)  (5.36)
was 23.6 mm. The relation for a normal distribution between the FWHM and the
standard deviation can be derived by setting P(x) = 50 and solving Eq. (5.36):

FWHM = 2.355σ  (5.37)

For the imaging system used in Fig. 5.6, the standard deviation σ = 10 mm.
The value of the response P(x) is 60.65% at a distance of x − x̄ = σ (Eq. (5.36)).
The value of the standard deviation can, therefore, also be obtained from the
measured percentage response curve by finding the x value at a percentage
response of 60.65% (Fig. 5.6).
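The factor 2.355 in Eq. (5.37) is 2√(2 ln 2), obtained by setting P(x) = 50 in Eq. (5.36):

```python
import math

sigma = 10.0  # mm, standard deviation from Fig. 5.6
fwhm = 2 * math.sqrt(2 * math.log(2)) * sigma  # Eq. (5.37): FWHM = 2.355*sigma
print(f"FWHM = {fwhm:.2f} mm")  # 23.55 mm
```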
In radionuclide energy spectroscopy, the photopeak distribution can be
fitted to a normal distribution (Fig.5.1). The energy resolution of scintillation
detectors is expressed as the FWHM of the photopeak distribution divided by
the photopeak energy E. The energy spectrum in medical physics applications
is measured in kiloelectronvolts or megaelectronvolts. The fractional energy
resolution RE is:
R_E = FWHM/E = 2.355σ/E  (5.38)
only information available, it is assumed that the mean of the distribution is equal
to the single measurement:

x̄ ≈ x  (5.39)

Therefore, the best estimate of the deviation σ from the true mean, which
should typify a single measurement x, is given by:

σ = √x  (5.41)
Interval (relative to σ)    Interval (for x̄ = 100, σ = 10)    Probability that x̄ is included (%)
x ± 0.67σ                   93.3–106.7                         50
x ± 1.00σ                   90.0–110.0                         68
x ± 1.64σ                   83.6–116.4                         90
x ± 2.00σ                   80.0–120.0                         95
x ± 2.58σ                   74.2–125.8                         99
x ± 3.00σ                   70.0–130.0                         99.7
σF = σ/x = √x/x = 1/√x (5.42)
Thus, the recorded number of counts or the value of the single measurement x completely determines the relative precision. The relative standard deviation decreases as the number of counts increases. Therefore, to achieve a required relative precision, a minimum number of counts must be accumulated.
The following example illustrates the important relation between the
relative precision and the number of counts recorded. If 100 counts are recorded,
the relative standard deviation is 10%. If 10 000 counts are recorded, the relative
standard deviation reduces to 1%. This example demonstrates the importance of
acquiring enough counts to meet the required precision.
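The relation σF = 1/√N from Eq. (5.42) underlying this example can be sketched as follows (a minimal illustration; the function name is not from the text):

```python
import math

def relative_sd(counts):
    """Fractional standard deviation of a single counted number, Eq. (5.42)."""
    return 1.0 / math.sqrt(counts)

for n in (100, 10_000, 1_000_000):
    print(f"N = {n:>9}: relative SD = {100 * relative_sd(n):.2f}%")
# N = 100 gives 10.00%, N = 10 000 gives 1.00%, N = 1 000 000 gives 0.10%
```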
It is easier to achieve the required precision when samples in counting tubes
are measured than when in vivo measurements on patients are performed. The
single measurement from a high count rate radioactive sample in a counting tube
will be obtained in a short time. However, if a low activity sample is measured,
the measurement time will have to be increased to achieve the desired precision.
The desired precision can be conveniently obtained by using automatic sample
counters. These counters can be set to stop counting after a preset time or preset
counts have been reached. By choosing the preset count option, the desired
precision can be achieved for each sample.
The acquisition time of in vivo measurements using collimated detector
probes, such as thyroid iodine uptake studies or imaging studies, can often not
be increased to achieve the desired precision, as a result of patient movement. In these single measurements, high sensitivity radiation detectors or a higher, but still acceptable, radioactive dose can be selected.
The precision of a single measurement is very important during radionuclide
imaging. If the number of counts acquired in a picture element or pixel is low, a
low precision is obtained. There will then be a wide range of fluctuations between
adjacent pixels. As a result of the poor quality of the images, it would only be
possible to identify large defect volumes or defects with a high contrast. To detect
a defect, the measured counts from the defect must lie outside the range of the background measurement plus or minus two standard deviations (x̄ ± 2σ). During
imaging, the number of counts measured in a target volume will be determined
by the acquisition time, activity within the target volume and the sensitivity of
the measuring equipment. The sensitivity of imaging equipment can be increased by increasing the FWHM of the spatial resolution, i.e. by accepting a poorer resolution. There is a trade-off between single sample counting precision and the spatial resolution of the imaging device to obtain images that provide the maximum diagnostic value during visual interpretation of the images by nuclear medicine physicians.
Counting statistics are also very important during image quantification
such as measuring renal function, left ventricular ejection fraction and tumour
uptake. During quantification, the accumulated counts by an organ or within a
target volume have to be accurately determined. In quantification studies, the
background activity, attenuation and scatter contributions have to be corrected.
These procedures further reduce the precision of quantification.
5.4.3. Caution on the use of the estimate of the precision of a
single measurement in sample counting and imaging
All conclusions are based on the measurement of a counted number of successes (the number of heads in coin tossing). In nuclear measurements or imaging, the estimate of the precision of a single measurement by using σ = √x can only be applied if x represents a counted number of successes, that is, the number of events recorded in a given observation time.
For a sum or difference of independent measurements x1, x2, x3, ..., the uncertainties add in quadrature:

σ(x1 ± x2 ± x3 ...) = √(σ(x1)² + σ(x2)² + σ(x3)² + ...) (5.45)

σF = √(σ(x1)² + σ(x2)² + σ(x3)² + ...) / (x1 ± x2 ± x3 ...) (5.46)

For counted numbers of events, σ(N) = √N, so that:

σ(N1 ± N2 ± N3 ...) = √(N1 + N2 + N3 + ...) (5.48)

σF = √(N1 + N2 + N3 + ...) / (N1 ± N2 ± N3 ...) (5.49)
173
CHAPTER 5
TABLE. Uncertainties of the sum and difference of two counts N1 and N2, for N2 ≪ N1 and N2 ≈ N1:

            N2 ≪ N1                       N2 ≈ N1
Source      Counts   σ      σF            Counts   σ      σF
N1          500      22.4   0.0447        500      22.4   0.0447
N2          10       3.2    0.3162        450      21.2   0.0471
N1 − N2     490      22.6   0.0461        50       30.8   0.6164
N1 + N2     510      22.6   0.0443        950      30.8   0.0324
174
If a measurement x is multiplied by a constant A:

σ(Ax) = A·σ(x) (5.52)

and

σF = A·σ(x)/(Ax) = σ(x)/x

For a counted number of events N:

σF = 1/√N (5.55)
Similarly, if a measurement x is divided by a constant B:

xD = x/B (5.56)

σ(xD) = σ(x)/B (5.57)

and

σF = (σ(x)/B)/(x/B) = σ(x)/x (5.58)

For a counted number of events N divided by a constant B:

M = N/B (5.59)

σ(M) = √N/B (5.60)

and

σF = 1/√N (5.61)
For a product of independent measurements, xP = x1·x2·x3 ..., the fractional uncertainties add in quadrature, so that:

σ(x1·x2·x3 ...) = √(σF(x1)² + σF(x2)² + σF(x3)² + ...)·(x1·x2·x3 ...) (5.65)
For counted numbers of events, σF(N)² = 1/N, so that for a product of counts N1·N2·N3 ...:

(σF(N1·N2·N3 ...))² = 1/N1 + 1/N2 + 1/N3 + ... (5.67)

σF(N1·N2·N3 ...) = √(1/N1 + 1/N2 + 1/N3 + ...) (5.68)

σ(N1·N2·N3 ...) = √(1/N1 + 1/N2 + 1/N3 + ...)·(N1·N2·N3 ...)
The results show that the standard deviation for the sum of all counts is the
same as if the measurement had been carried out by performing a single count,
extending over the period represented by all of the counts.
5.6.1.2. Mean value of multiple independent counts
If the mean value N̄ of the n independent counts referred to in the previous section is calculated, then:

N̄ = Ns/n (5.72)
σ(N̄) = √Ns/n = √(nN̄)/n = √(N̄/n) (5.73)
The counting rate R is given by:

R = N/t (5.74)
In the above equation, it is assumed that the time t is measured with a very
small uncertainty, so that t can be considered a constant. The calculation of the
uncertainty associated with the counting rate is an application of the propagation
of errors, multiplying by a constant (Eq.(5.60)):
σR = σx/t = √N/t = √(R/t) (5.75)

and the fractional standard deviation is:

σF = σx/(tR) = √N/(tR) = 1/√(tR) (5.76)
σF = 1/√(tR) = 1/√(10 000 × 100) = 0.001 = 0.1%
Although the counts acquired for sample 1 and the count rate of sample 2
were numerically the same, the uncertainties associated with the measurements
were very different. When calculations on counts are performed, it must be
determined whether the value is a single value or whether it is a value that has
been obtained by calculation.
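The distinction can be illustrated numerically (assuming, as in the example above, that the count rate of 10 000 cpm came from a 100 min acquisition):

```python
import math

# A single counted number of 10 000 counts, Eq. (5.42)
sigma_f_counts = 1 / math.sqrt(10_000)

# A count rate of 10 000 cpm derived from a 100 min measurement, Eq. (5.76)
rate, t = 10_000, 100
sigma_f_rate = 1 / math.sqrt(t * rate)

print(f"{sigma_f_counts:.1%} versus {sigma_f_rate:.1%}")   # 1.0% versus 0.1%
```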
5.6.3. Effects of background counts
Background counts are those counts that do not originate from the sample
or target volume or are unwanted counts such as scatter. The background
counts during sample counting consist of electronic noise, detection of cosmic
rays, natural radioactivity in the detector, and down scatter radioactivity from
non-target radionuclides in the sample. During in vivo measurements, such
as measurement of thyroid iodine uptake or left ventricular ejection fraction,
radiation from non-target tissue will also contribute to background. Scattered
radiation from target as well as non-target tissue will influence quantification and
will be included in the background. To obtain the true net counts, the background
is subtracted from the gross counts accumulated. The uncertainty of the true
target counts can be calculated using Eqs(5.48) and (5.49), and the uncertainty of
true count rates can be calculated using Eqs(5.75) and (5.76).
If the background count is Nb, and the gross counts of the sample and
background is Ng, then the net sample count Ns is:
Ns = Ng − Nb (5.77)

The standard deviation and fractional standard deviation of the net counts are then, from Eqs (5.48) and (5.49):

σ(Ns) = √(Ng + Nb) (5.78)

σF(Ns) = √(Ng + Nb) / (Ng − Nb) (5.79)
If the background count rate is Rb, acquired in time tb, and the gross count
rate of the sample and background is Rg, acquired in time tg, then the net sample
count rate Rs is:
Rs = Rg − Rb (5.80)
The standard deviation for the count rate Rs is given by Eqs (5.45) and (5.75):

σ(Rs) = √(Rg/tg + Rb/tb) (5.81)
The fractional standard deviation for the count rate Rs is given by Eqs (5.46) and (5.76):

σF(Rs) = √(Rg/tg + Rb/tb) / (Rg − Rb) (5.82)
If the same counting time t is used for both sample and background
measurement:
σ(Rs) = √((Rg + Rb)/t) (5.83)

and

σF(Rs) = √(Rg + Rb) / (√t·(Rg − Rb)) (5.84)
σF(Ns) = √(484 + 441) / (484 − 441) = 0.7073

P(Ng − Nb) = 70.7%
TABLE 5.5. Uncertainties of net counts following background correction, for Ng ≈ Nb and Ng ≫ Nb:

           Ng ≈ Nb                                  Ng ≫ Nb
Source     Counts   σ (counts)  σF       P (%)      Counts   σ (counts)  σF       P (%)
Ng         484      22.0        0.0455   4.5        484      22.0        0.0455   4.5
Nb         441      21.0        0.0476   4.8        169      13.0        0.0769   7.7
Ns         43       30.4        0.7073   70.7       315      25.6        0.0811   8.1
3σ(Ns)     91 counts — not significant              77 counts — significant
5.6.3.2. Example: error in net target count rate following background correction

A planar image of the liver is acquired for the detection of tumours. Two equal sized ROIs, ROI1 and ROI2, were selected to cover the areas of the two potential tumours. The gross count rate Rg was 484 counts per minute (cpm) in ROI1 (Table 5.6) and 484 cpm in ROI2. The background count rates Rb, selected over normal tissue of the same area as the ROIs, were 441 and 169 cpm. The acquisition time of the image was 2 min. The calculation of the uncertainties in the net tumour count rates is presented below.
The difference and the error associated with the difference (Eq. (5.80) and Eq. (5.82)) when Rg ≈ Rb are:

Rg − Rb = 484 − 441 = 43 cpm

σ(Rg − Rb) = √(484/2 + 441/2) = 21.5 cpm

σF(Rg − Rb) = √(484/2 + 441/2) / (484 − 441) = 0.5001

P(Rg − Rb) = 50.0%
TABLE 5.6. Uncertainties of net count rates following background correction, for Rg ≈ Rb and Rg ≫ Rb:

           Rg ≈ Rb                                   Rg ≫ Rb
Source     Rate (cpm)  σ (cpm)  σF       P (%)       Rate (cpm)  σ (cpm)  σF       P (%)
Rg         484         15.6     0.0321   3.2         484         15.6     0.0321   3.2
Rb         441         14.8     0.0337   3.4         169         9.2      0.0544   5.4
Rs         43          21.5     0.5001   50.0        315         18.1     0.0574   5.7
3σ(Rs)     65 cpm — not significant                  54 cpm — significant
there is a 0.3% chance that the difference is caused by random error and this
difference is considered significant.
The examples in the previous section, determining whether tumours were present following a liver scan, illustrate the application of testing the significance of the difference between two counts (Table 5.5). The net counts and uncertainty over two tumour areas were calculated. Do the counts over the tumour areas differ significantly from the normal background area?
For the difference Ng − Nb (Table 5.5) to be significant, Eq. (5.85) must apply.
The difference of 43 counts was less than the criterion of 3σ(N1 − N2) = 91 counts and the difference is, therefore, not significant: it cannot be concluded that a tumour is present. An example where Ng ≫ Nb is also given in Table 5.5. In this case, the difference of 315 counts was larger than the 3σ(N1 − N2) value of 77 counts. The difference in this case is significant: it can be concluded that a tumour is present, with a smaller than 0.3% chance that the observed difference is due to random error.
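The 3σ significance test applied above can be sketched as (the function name is illustrative):

```python
import math

def net_significant(n_gross, n_bkg):
    """Test whether Ng - Nb exceeds 3*sigma(Ng - Nb), with sigma from Eq. (5.48)."""
    net = n_gross - n_bkg
    threshold = 3 * math.sqrt(n_gross + n_bkg)
    return net > threshold

print(net_significant(484, 441))   # False: 43 < ~91, not significant
print(net_significant(484, 169))   # True: 315 > ~77, significant
```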
The significance of differences between the counting rates of samples can
also be calculated. Two counting rates, R1 and R2, are acquired using counting
times t1 and t2.
The uncertainty associated with the difference is given by applying Eqs (5.45) and (5.75):

σ(R1 − R2) = √(R1/t1 + R2/t2) (5.86)
the minimum net counts Nm that can be detected with 0.3% confidence is given by:

Nm = N1 − N2 = 3σ(N1 − N2) (5.88)

or

Nm = Ng − Nb = 3σ(Ng − Nb) (5.89)
Solving this equation for Ng will give the minimum detectable gross counts:

Ng = ((2Nb + 9) + √(72Nb + 81)) / 2 (5.90)

The minimum activity Am that can be detected is:

Am = Nm/(tS) (5.92)

where

S is the sensitivity of the detection system, usually expressed as count rate per becquerel;

and t is the time for which the background was counted.
5.6.5.1. Example: calculation of minimum activity that can be detected

A detector is to be used to detect 131I in the thyroid of radiation workers. The background count was 441 counts measured over a period of 5 min. The acquisition time for the thyroid was also 5 min. The sensitivity of the counter was 0.1 counts·s⁻¹·Bq⁻¹. What is the minimum activity that can be detected?
From Eq. (5.90):

Ng = ((2 × 441 + 9) + √(72 × 441 + 81)) / 2 = 535 counts
From Eq. (5.92), the minimum detectable activity is:

Am = (535 − 441) / (5 × 60 × 0.1) = 3.124 Bq
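Equations (5.90) and (5.92) for this example can be checked as follows (a sketch; the time is given in seconds so that the sensitivity in counts·s⁻¹·Bq⁻¹ can be used directly):

```python
import math

def min_detectable_gross(n_b):
    # Eq. (5.90)
    return ((2 * n_b + 9) + math.sqrt(72 * n_b + 81)) / 2

def min_detectable_activity(n_b, t_seconds, sensitivity):
    # Eq. (5.92): Am = Nm/(t*S) with Nm = Ng - Nb
    return (min_detectable_gross(n_b) - n_b) / (t_seconds * sensitivity)

print(round(min_detectable_gross(441)))                     # 535 counts
print(round(min_detectable_activity(441, 5 * 60, 0.1), 2))  # 3.12 Bq
```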
When the gross and background count rates are acquired over different times tg and tb, the condition Rg − Rb = 3σ(Rg − Rb) (Eq. (5.93)) can be solved for Rg:

Rg = ((2Rb + 9/tg) + √(36Rb/tg + 81/tg² + 36Rb/tb)) / 2 (5.94)

The minimum detectable net count rate Rm is approximately:

Rm ≈ 3·√(Rb/tg + Rb/tb) (5.95)

The minimum activity that can be detected is:

Am = Rm/S (5.96)
where S is the sensitivity of the detection system usually expressed as count rate
per becquerel.
5.6.5.2. Example 2: calculation of minimum activity that can be detected

A detector is to be used to detect 131I in the thyroid of radiation workers. The background count rate was 441 cpm measured over a period of 5 min, and the thyroid count rate was measured over 1 min. The sensitivity of the counter was 0.1 counts·s⁻¹·Bq⁻¹. What is the minimum activity that can be detected?
From Eq. (5.94):

Rg = ((2 × 441 + 9/1) + √(36 × 441/1 + 81/1² + 36 × 441/5)) / 2 = 515 cpm
It should be noted that Rg − Rb = 74 cpm and 3σ(Rg − Rb) = 74 cpm, as was specified in Eq. (5.93). The minimum detectable radioactivity is:

Am = (515 − 441) / (0.1 × 60) = 12.28 Bq
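Equation (5.94) for this second example can be checked numerically (variable names illustrative; times in minutes, rates in cpm):

```python
import math

def min_detectable_gross_rate(r_b, t_g, t_b):
    # Eq. (5.94)
    disc = 36 * r_b / t_g + 81 / t_g**2 + 36 * r_b / t_b
    return ((2 * r_b + 9 / t_g) + math.sqrt(disc)) / 2

r_g = min_detectable_gross_rate(441, 1, 5)
a_m = (r_g - 441) / (0.1 * 60)     # Eq. (5.96), with S converted to counts per min per Bq
print(round(r_g), round(a_m, 2))   # 515 12.28
```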
σF1(NS1) = √(Ng1 + Nb1) / (Ng1 − Nb1)

and

σF2(NS2) = √(Ng2 + Nb2) / (Ng2 − Nb2)

The ratio of the fractional uncertainties for the net sample counts obtained with the two systems is, therefore:

σF1(NS1)/σF2(NS2) = [√(Ng1 + Nb1)/(Ng1 − Nb1)] / [√(Ng2 + Nb2)/(Ng2 − Nb2)] (5.97)
If σF1(NS1)/σF2(NS2) < 1, then system 1 is statistically the preferred system. If σF1(NS1)/σF2(NS2) > 1, then system 2 is preferred.
Systems can be compared using the count rate and the fractional standard deviation of the net count rate Rs (Eq. (5.82)). To compare systems 1 and 2, the ratio of the fractional standard deviations is calculated:

σF1(RS1)/σF2(RS2) = [√(Rg1/tg1 + Rb1/tb1)/(Rg1 − Rb1)] / [√(Rg2/tg2 + Rb2/tb2)/(Rg2 − Rb2)] (5.98)
It should be noted that Eqs (5.97) and (5.98) are the same except that in Eq. (5.98) counts are substituted by counting rates. Equation (5.98) can also be used in planar imaging. Different collimators can be evaluated by comparing counts from a target region to a non-target or background region. However, in imaging, spatial resolution is also important and must be considered.
5.6.7. Estimating required counting times
It is supposed that it is desired to determine the net sample or target
counting rate Rs to within a certain fractional uncertainty F(Rs). It is supposed
further that the approximate gross sample Rga and background Rba counting rates
are known from preliminary measurements. If a counting time t is to be used
for both the sample or target and the background counting measurements, then
the time required to achieve the desired level of statistical reliability is given by
Eq.(5.84):
t = (Rga + Rba) / ([σF(Rs)]²·(Rga − Rba)²)
For a gross count rate of 900 cpm, a background count rate of 100 cpm and a desired fractional uncertainty of 5%:

t = (900 + 100) / ((0.05)²·(900 − 100)²) = 1000 / ((0.05)²·(800)²) = 0.625 min
The time for both the thyroid and background counts is 0.625min, resulting
in a total time of 1.25min.
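The counting time calculation above can be sketched as:

```python
def required_time(r_gross, r_bkg, frac_sd):
    """Time per measurement (sample and background each) for a target sigma_F(Rs)."""
    return (r_gross + r_bkg) / (frac_sd**2 * (r_gross - r_bkg)**2)

t = required_time(900, 100, 0.05)
print(t, 2 * t)   # 0.625 min each, 1.25 min in total
```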
5.6.8. Calculating uncertainties in the measurement
of plasma volume in patients
A plasma volume (PV) measurement is required on a patient and the
uncertainty in the PV measurement is to be calculated. The PV is measured by
using the dilution principle. A labelled plasma sample of a known volume is
prepared for injection into the patient. A standard sample with the same activity
and volume is also prepared for counting. The standard sample is diluted before
a sample is counted. Ten minutes after injection of the sample, a blood sample
is obtained, the plasma separated from the blood and the blood sample counted.
The PV is calculated using the following equation:
PV = (Rs/Rp)·V·D (5.100)

where

Rs = Rs+b − Rb is the net count rate per millilitre of the standard sample;

Rb is the count rate of the background;

Rs+b is the gross count rate per millilitre of the standard sample;

Rp = Rp+b − Rb is the net count rate per millilitre of the plasma sample;

V is the volume of the injected sample;

and D is the dilution factor of the standard sample.
TABLE. Values and uncertainties in the measurement of plasma volume (PV):

Symbol            Value   Unit   Calculation of σ                                   σ       σF (%)
t                 10      min    —                                                  —       —
Rs+b              3200    cpm    √(Rs+b/t)                                          17.89   0.559
Rb                200     cpm    √(Rb/t)                                            4.472   2.236
Rs                3000    cpm    √(σ(Rs+b)² + σ(Rb)²)                               18.44   0.615
Rp+b              1200    cpm    √(Rp+b/t)                                          10.95   0.913
Rp                1000    cpm    √(σ(Rp+b)² + σ(Rb)²)                               11.83   1.183
Rs/Rp             3.000   —      (Rs/Rp)·√((σ(Rs)/Rs)² + (σ(Rp)/Rp)²)               0.040   1.333
V                 5       mL     —                                                  0.150   3.000
(Rs/Rp)·V         15      mL     (Rs/Rp)V·√((σ(Rs/Rp)/(Rs/Rp))² + (σ(V)/V)²)        0.492   3.283
D                 200     —      —                                                  6.000   3.000
PV = (Rs/Rp)VD    3000    mL     PV·√((σ((Rs/Rp)V)/((Rs/Rp)V))² + (σ(D)/D)²)        133     4.447
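The table can be reproduced step by step; a sketch of the propagation, assuming the values above:

```python
import math

t = 10.0                                   # counting time (min)
r_sb, r_pb, r_b = 3200.0, 1200.0, 200.0    # gross standard, gross plasma, background (cpm)
v, sv = 5.0, 0.150                         # injected volume (mL) and its uncertainty
d, sd = 200.0, 6.0                         # dilution factor and its uncertainty

# Count rate uncertainties, Eq. (5.75): sigma(R) = sqrt(R/t)
s_rsb, s_rpb, s_rb = (math.sqrt(r / t) for r in (r_sb, r_pb, r_b))

# Net rates and their uncertainties, Eqs (5.77) and (5.78)
r_s, s_rs = r_sb - r_b, math.hypot(s_rsb, s_rb)
r_p, s_rp = r_pb - r_b, math.hypot(s_rpb, s_rb)

# Fractional uncertainties of ratios/products add in quadrature, Eq. (5.65)
f_pv = math.sqrt((s_rs / r_s)**2 + (s_rp / r_p)**2 + (sv / v)**2 + (sd / d)**2)

pv = (r_s / r_p) * v * d
print(round(pv), f"{100 * f_pv:.3f}%")     # 3000 4.447%
```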
The fractional variance of the multiplication M of the light sensor is assumed to be σF²(M) = 1/(M − 1). It is assumed that the production of light photons follows a Poisson distribution and, therefore, σF²(x) = 1/x. The fractional variance of the detection of a photon, a binomial process with detection probability p, is (1 − p)/p, so that the total fractional variance is:

σF² = 1/x + (1/x)·((1 − p)/p) + 1/(xp(M − 1)) (5.103)
This can be rewritten as:

σF² = (1/x)·(1 + (1 − p)/p + 1/(p(M − 1))) (5.105)
The distribution of intervals between successive random events follows from the probability of observing no events for a time t, multiplied by the probability of an event in the following dt:

P1(t)dt = P(0) × r dt (5.106)

where P(0) is the probability of no events during the interval (0, t) and r dt is the probability of an event during dt.
The first factor on the right hand side follows directly from the earlier discussion of the Poisson distribution. We seek the probability that no events will be recorded over an interval of length t, for which the average number of recorded events is rt. From Eq. (5.23):
P(0) = (rt)⁰·e^(−rt) / 0!

P(0) = e^(−rt) (5.107)
P1(t) = r·e^(−rt) (5.108) is now the distribution function for intervals between adjacent random events. Figure 5.7 shows the simple exponential shape of the distribution. It should be noted that the most probable interval length is zero. The average interval length is calculated by applying Eq. (5.8):
t̄ = ∫₀^∞ t·P1(t)dt / ∫₀^∞ P1(t)dt = ∫₀^∞ t·e^(−rt)dt / ∫₀^∞ e^(−rt)dt = 1/r (5.109)
dead period, although not recorded, still create another fixed dead time τ on the system following the lost event. The recorded rate of events m is identical to the rate of occurrence of time intervals between true events which exceed τ. The probability of intervals larger than τ can be obtained by integrating Eq. (5.108):

P2(τ) = ∫τ^∞ P1(t)dt = e^(−rτ) (5.110)

The recorded rate is then:

m = r·e^(−rτ) (5.111)
CHAPTER 6
BASIC RADIATION DETECTORS
C.W.E. VAN EIJK
Faculty of Applied Sciences,
Delft University of Technology,
Delft, Netherlands
6.1. INTRODUCTION
6.1.1. Radiation detectors complexity and relevance
Radiation detectors are of paramount importance in nuclear medicine. The
detectors provide a wide range of information including the radiation dose of
a laboratory worker and the positron emission tomography (PET) image of a
patient. Consequently, detectors with strongly differing specifications are used.
In this chapter, general aspects of detectors are discussed.
6.1.2. Interaction mechanisms, signal formation and detector type
A radiation detector is a sensor that upon interaction with radiation
produces a signal that can preferably be processed electronically to give the
requested information. The interaction mechanisms for X rays and γ rays are the
photoelectric effect, Compton scattering and pair formation, where the relative
importance depends on the radiation energy and the interaction medium. These
processes result in the production of energetic electrons which eventually
transfer their energy to the interaction medium by ionization and excitation.
Charged particles, such as β particles, transfer their energy directly by ionization
and excitation. In all cases, the ionization results either in the production of
charge carriers, viz. electrons and ions in a gaseous detection medium, and
electrons and holes in a semiconductor detector material, or in the emission of
light quanta in a scintillator. These processes represent the three major groups
of radiation detectors, i.e. gas filled, semiconductor and scintillation detectors.
In the former two cases, a signal, charge or current is obtained from the
detector as a consequence of the motion of charge in the applied electric field
(Figs6.1(a)and (b)). In the scintillation detector, light emission is observed by
means of a light sensor that produces observable charge or current (Fig.6.1(c)). A
detailed discussion is presented in Sections 6.26.4.
FIG. 6.1. Principle of operation of (a) a gas filled detector, i.e. an ionization chamber; (b) a semiconductor detector, i.e. a silicon detector; and (c) a scintillation detector. The former two detectors are capacitors. The motion of charge results in an observable signal. The light of a scintillation detector is usually detected by a photomultiplier tube.
former is proportional to ρZeff^3–4, where ρ is the density and Zeff is the effective atomic number of the compound. Compton scattering is almost independent of Z; it is just proportional to ρ. The density of a gas filled detector is three orders of magnitude smaller than that of a solid state detector. Thus, solid state detectors are very important in nuclear medicine. At 511 keV, even the highest possible ρ and Zeff are needed. Gas filled detectors are used in dosimetry.
6.1.4.2. Energy, time and position resolution
Energy, time and position resolution depend on a number of factors. These
are different depending on the physical property considered and the type of
detector; yet, there is one aspect in common. Resolution is strongly coupled to
the statistics of the number of information carriers. For radiation energy E, this
number is given by N=E/W in which W is the mean energy needed to produce
an information carrier. Typical W values are shown in Table6.1. As the smallest
number of information carriers in the process of signal formation is determinative,
for scintillation the effect of the light sensor is also shown. From the W values,
it can be seen that semiconductor detectors produce the largest number of
information carriers and inorganic scintillators coupled to a photomultiplier tube
(PMT) the smallest. If a γ ray energy spectrum is measured, the observed energy resolution is defined as the width of a line at half height (FWHM: full width at half maximum) ΔE divided by its energy E. With N = E/W, and ΔN being the corresponding FWHM:
ΔE/E = ΔN/N = 2.35·√(FN)/N = 2.35·√(FW/E) (6.1)

where

ΔN = 2.35σ for a Gaussian distribution;

σ² = FN is the variance;

and F is the Fano factor. For gas filled detectors, F = 0.05–0.20; for semiconductors, F ≈ 0.12. For a scintillator, F = 1.
Using the corresponding F and W values, it can be seen from Eq. (6.1) that the energy resolution of a semiconductor is ~16 times better than that of an inorganic scintillator coupled to a PMT. In this discussion, other contributions to the energy resolution were neglected, viz. from electronic noise in the case of the semiconductor detector and from scintillator and PMT related effects in the
TABLE 6.1. ENERGIES W NEEDED TO PRODUCE INFORMATION CARRIERS IN VARIOUS DETECTOR TYPES

Detector type                                                  W (eV)
Gas filled (electron–ion pair)                                 30
Semiconductor (electron–hole pair)                             3
Inorganic scintillator (light quantum)                         25
Inorganic scintillator + photomultiplier tube (electron)       100
Inorganic scintillator + silicon diode (electron–hole pair)    35
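The ~16 times figure can be checked from Eq. (6.1) with the W values in Table 6.1 (assuming F = 0.12 and W = 3 eV for the semiconductor, F = 1 and W = 100 eV for the scintillator + PMT):

```python
import math

def resolution(F, W, E):
    # Eq. (6.1): dE/E = 2.35*sqrt(F*W/E)
    return 2.35 * math.sqrt(F * W / E)

E = 140e3                       # eV, e.g. the 140 keV line of 99mTc
semi = resolution(0.12, 3.0, E)
scint = resolution(1.0, 100.0, E)
print(round(scint / semi, 1))   # 16.7: the scintillator resolution is roughly 16x worse
```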
the scintillation light, and (ii) the time needed to process the signals and to handle
the data. For a better understanding, the concept of dead time is introduced. It is the minimum time separation τ between interactions (true events) at which these are counted separately. Non-paralysable and paralysable dead time are considered. In the former case, if within a period of time τ after a true event a second true event occurs, it cannot be observed. If the second event occurs at a time t > τ, it will be counted. The dead period is of fixed length τ. Defining the true event rate T (number per unit time) and the counting rate R, the fraction of time the system is dead is given by Rτ and the rate of loss of events is TRτ. Considering that the latter is also T − R, the non-paralysable case can be derived:
R = T/(1 + Tτ) (6.2)
If in the paralysable model a second event occurs at t > τ after the first event, it will be counted. If a second event occurs at t < τ after the first event, it will not be counted. However, in the paralysable case, if t < τ, the second event will extend the dead time by a period τ from the moment of its interaction. If a third event occurs at t > τ after the first event but within a period of time τ after the second event, it will not be counted either. It will add another period of τ. The dead time is not of fixed length. It can become much larger than the basic period τ and in this case it is referred to as extendable dead time. Only if an event occurs at a time > τ after the previous event will it be counted. In this case, the counting rate is the rate of occurrence of time intervals > τ between events, for which the following can be derived:

R = T·e^(−Tτ) (6.3)
Figure 6.2 demonstrates the relation between R and T for the two cases above and for the case of τ = 0, i.e. R = T.
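Equations (6.2) and (6.3) can be compared directly (τ = 5 μs is just an assumed value):

```python
import math

def non_paralysable(T, tau):
    return T / (1 + T * tau)        # Eq. (6.2)

def paralysable(T, tau):
    return T * math.exp(-T * tau)   # Eq. (6.3)

tau = 5e-6   # assumed dead time of 5 microseconds
for T in (1e4, 1e5, 1e6):
    print(f"T={T:9.0f}/s  non-paralysable: {non_paralysable(T, tau):9.0f}/s  "
          f"paralysable: {paralysable(T, tau):9.0f}/s")
```

At high true event rates the paralysable counting rate collapses, while the non-paralysable rate saturates at 1/τ.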
6.2. GAS FILLED DETECTORS
6.2.1. Basic principles
The mode of operation of a gas filled detector depends strongly on the
applied voltage. In Fig.6.3(a), the signal amplitude is shown as a function of the
voltage V. If upon interaction with radiation an energetic electron ploughs through
the gas, the secondary electrons produced will tend to drift to the anode and the
ions to the cathode (seeFig.6.1(a)). If the voltage is relatively low, the electric
FIG.6.2. Counting rate R as a function of true event rate T in the absence of dead time
(R=T), in the non-paralysable case and in the paralysable case.
field E is too weak to efficiently separate the negative and positive charges. A number of them will recombine. The full signal is not observed; this is the recombination region. As the voltage is increased, more and more electrons and ions escape from recombination. The region of full ionization is then reached. For heavier charged particles and at higher rates, this will happen at a higher voltage. The signal will become constant over a wide voltage range. Typical operating voltages of an ionization chamber are in the range of 500–1000 V.
For the discussion of operation at stronger electric fields, cylindrical
detector geometry with a thin anode wire in the centre and a metal cylinder as
cathode (seeFig.6.3(b)) is introduced. The electric field E(r) is proportional to the
applied voltageV and inversely proportional to the radius r. At a certain voltage
VT, the threshold voltage, the electric field near the anode wire is so strong that a
drifting electron will gain enough energy to ionize a gas atom in a collision. The
proportional region is entered. If the voltage is further increased, the ionization
zone will expand and an avalanche and significant gas amplification are obtained.
At normal temperature and pressure, the threshold electric field ET 106 V/m. For
parallel plate geometry with a depth of ~1cm, this would imply that VT 10kV,
which is not practicable. Due to the r1 dependence, in the cylindrical geometry,
manageable voltages can be applied for proportional operation (13kV). As long
as the gas gain M is not too high (M 104), it is independent of the deposited
energy. This is referred to as the proportional region and proportional counter.
If the voltage is further increased, space charge effects will start to reduce the
effective electric field and, consequently, affect the gain. This process will start
at a lower voltage for the higher primary ionization density events. The limited
proportionality region is entered. With further increasing voltage, the pulse
height will eventually become independent of the deposited energy. This is the Geiger–Müller region.
FIG. 6.3. (a) Pulse height as a function of applied high voltage for gas filled detectors, showing the recombination, full ionization, proportional, limited proportionality and Geiger–Müller regions; (b) cylindrical detector geometry with a central anode wire and a cathode cylinder.
is used as an example. The diode structure is realized by means of semiconductor-electronics technology. Silicon doped with electron-donor impurities, called
n-type silicon, can be used to reduce the number of holes. Electrons are the
majority charge carriers. Silicon with electron-acceptor impurities is called p-type
silicon; the number of free electrons is strongly reduced. The majority charge
carriers are the holes. When a piece of n-type material is brought into contact
with a piece of p-type material, a junction diode is formed. At the junction,
a space charge zone results, called a depletion region, due to diffusion of the
majority charge carriers. When a positive voltage is applied on the n-type silicon
side with respect to the p-type side, the diode is reverse-biased and the thickness
of the depletion layer is increased. If the voltage is high enough, the silicon will
be fully depleted. There are no free charge carriers left and there is virtually no
current flowing. Only a small current will remain, the leakage or dark current.
To make a diode, n-type silicon is the starting material and a narrow zone
is doped with impurities to make a p+n junction, as indicated at the bottom of
Fig.6.1(b). The notation p+ refers to a high doping concentration. For further
reduction of the leakage current, high purity silicon and a blocking contact are
used, i.e. an n+ doping at the n-type side, also indicated in Fig.6.1(b). If the
leakage current is still problematic, the temperature can be decreased. The use
of high purity semiconductor material is not only important for reducing the
leakage current. Energy levels in the gap may trap charge carriers resulting from
the interaction with radiation and the energy resolution of a detector would be
reduced.
The above described approach is not the only way to make a detector.
It is possible to start with p-type material and make an n+p junction diode.
Furthermore, it is possible to apply a combination of surface oxidation and
deposition of a thin metal layer. Such contacts are called surface barrier contacts.
If the thickness of a detector is <1mm, it is even possible to use intrinsic
silicon, symbol i, with p+ and n+ blocking contacts on opposite sides (pin
configuration). For thicker silicon detectors, yet another method is used. In
slightly p-type intrinsic silicon, impurities are compensated for by introducing
interstitial Li ions that act as electron donors. The Li ions can be drifted over
distances of ~10mm. Furthermore, if the bandgap of a semiconductor is large
enough, just metal contacts will suffice.
Important parameters are the mobilities, μe and μh, and the lifetimes, τe and τh, of electrons and holes, respectively. The drift velocity ve,h in an electric field E is given by the product of the mobility and the field strength. Consequently, for a given detector size and electric field, the mobilities provide the drift times of the charge carriers and the signal formation times. From the mobilities and the lifetimes, information on the probability that the charge carriers will arrive at the collecting electrodes is obtained. The path length a charge carrier can travel in its lifetime is given by:

ve,h·τe,h = μe,h·τe,h·E (6.4)
If this is not significantly longer than the detector depth, charge carriers
will be lost.
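A quick check of Eq. (6.4) with representative μτ products from Table 6.2 (the field of 1000 V/cm is an assumed operating value):

```python
# Path length (cm) a carrier travels in its lifetime: (mu*tau) * E, Eq. (6.4)
def path_length_cm(mu_tau, field):
    return mu_tau * field           # mu*tau in cm^2/V, field in V/cm

field = 1000.0                      # V/cm, assumed
print(path_length_cm(1.0, field))   # Si electrons (mu*tau > 1 cm^2/V): far beyond any detector depth
print(path_length_cm(2e-4, field))  # CdTe holes (mu*tau ~ 2e-4 cm^2/V): ~0.2 cm, so losses in thick detectors
```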
6.3.2. Semiconductor detectors
Some properties of semiconductor detector materials of relevance for
nuclear medicine, viz. the density , effective atomic number for photoelectric
effect Zeff, Egap and W value, the mobilities e,h and the products of the mobilities,
and the lifetimes of the charge carriers, are presented in Table6.2.
Silicon is primarily of interest for (position sensitive) detection of low energy X rays, β particles and light quanta. The latter are discussed in Section 6.4.2.2.
TABLE 6.2. PROPERTIES OF SEMICONDUCTOR DETECTOR MATERIALS

Material          ρ (g/cm³)  Zeff  Egap (eV)  W (eV)  μe (cm²/V·s)  μh (cm²/V·s)  μeτe (cm²/V)  μhτh (cm²/V)
Si (300 K)        2.3        14    1.12       3.6     1350          480           >1            ~1
Si (77 K)         2.3        14    1.16       3.8     21 000        11 000        >1            >1
Ge (77 K)         5.3        32    0.72       3.0     36 000        42 000        >1            >1
CdTe (300 K)      6.2        50    1.44       4.7     1100          80            3×10⁻³        2×10⁻⁴
Cd0.8Zn0.2Te
(CZT, 300 K)      ~6         50    1.5–2.2    ~5      1350          120           4×10⁻³        1×10⁻⁴
HgI2 (300 K)      6.4        69    2.13       4.2     70            —             5×10⁻³        3×10⁻⁵
X ray and γ ray detection. Another group is formed by organic scintillators, viz. crystals, plastics and liquids, which have a low density and atomic number, and are primarily of interest for counting of β particles. In some inorganic scintillator materials, metastable states (traps) are created that may live from milliseconds to months. These materials are called storage phosphors. Scintillators and storage phosphors are discussed later in this section. However, as light detection is of paramount importance, light sensors are introduced first.
6.4.2. Light sensors
6.4.2.1. Photomultiplier tubes
The schematic of a scintillation detector is shown in Fig.6.4(a). A
scintillation crystal is coupled to a PMT. The inside of the entrance window of
the evacuated glass envelope is covered with a photocathode which converts
photons into electrons. The photocathode consists of a thin layer of alkali
materials with very low work functions, e.g. bialkali K2CsSb, multialkali
Na2KSb:Cs or a negative electron affinity (NEA) material such as GaAs:Cs,O.
The conversion efficiency of the photocathode , called quantum efficiency,
is strongly wavelength dependent (seeFig.6.5). At 400 nm, =2540%. The
emitted electrons are focused onto the first dynode by means of an electrode
structure. The applied voltage is in the range of 200500 V, and the collection
efficiency 95%. Typical dynode materials are BeOCu, Cs3Sb and GaP:Cs.
The latter is an NEA material. If an electron hits the dynode, electrons are
released by secondary emission. These electrons are focused onto the next
dynode and secondary electrons are emitted, etc. The number of dynodes n is in
the range of 8–12. The signal is obtained from the last electrode, the anode. At
an inter-dynode voltage of ~100 V, the multiplication factor per dynode is δ ≈ 5. In
general, a higher multiplication factor is applied for the first dynode, e.g. δ1 ≈ 10,
to improve the single-electron pulse resolution, and consequently the signal to
noise ratio. Starting with N photons in the scintillator and assuming full light
collection on the photocathode, the number of electrons Nel at the anode is given
by:

Nel = η α δ1 δ^(n−1) N (6.5)
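The cascade in Eq. (6.5) can be put into numbers. All parameter values below are illustrative figures taken from the ranges quoted above, not measured data:

```python
# Illustrative estimate of the anode signal of a PMT, following Eq. (6.5).
# Every parameter value is an example figure consistent with the ranges
# given in the text, not a measurement.

def anode_electrons(n_photons, qe=0.25, alpha=0.95, delta1=10, delta=5, n_dynodes=10):
    """Number of electrons at the anode for n_photons hitting the photocathode.

    qe       : photocathode quantum efficiency (eta, ~25-40% at 400 nm)
    alpha    : photoelectron collection efficiency onto the first dynode
    delta1   : multiplication factor of the first dynode
    delta    : multiplication factor of the remaining dynodes
    n_dynodes: total number of dynodes (8-12 typical)
    """
    return n_photons * qe * alpha * delta1 * delta ** (n_dynodes - 1)

# 10 000 scintillation photons fully collected on the photocathode:
print(f"{anode_electrons(10_000):.3g}")  # ~4.6e10 electrons at the anode
```

With 10 dynodes and δ = 5 the tube alone contributes a gain of roughly 10⁷, which is why the anode signal is large enough to drive the subsequent electronics directly.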
FIG. 6.4. (a) Schematic of a scintillation detector: a scintillator in its canning, with reflector and optical coupling, mounted on the window of a PMT with photocathode and focusing electrodes. (b) As (a), but with microchannel plates between photocathode and anode.

FIG. 6.5. Quantum efficiency as a function of wavelength for a UV sensitive silicon photodiode and a photomultiplier tube.
called the breakdown voltage Vbr. For gains of M ≈ 10⁵–10⁶, an APD can be
used at voltages >Vbr, where it operates in Geiger mode. The pulses are equal in
magnitude. Signal quenching techniques have to be used. Circular and square
APDs are available with areas in the sub-square millimetre to ~1 cm² range.
Various pixelated APDs are available, e.g. of 4 pixels × 8 pixels at a pitch of
~2.5 mm and a fill factor of ~40%.
In a hybrid photomultiplier tube (HPMT), the photoelectrons are accelerated
in an electric field resulting from a voltage difference of ~10 kV, applied between
the photocathode and a silicon diode which is placed inside the vacuum enclosure.
The diode is relatively small, thus reducing the capacitance and, consequently,
the noise level. As the production of one e–h pair costs 3.6 eV, ~3000 e–h pairs
are produced in the diode per impinging electron. Consequently, the signals from
one or more photons can be observed well separated. Equipped with an APD, an
overall gain of ~10⁵ is possible. HPMTs have been made with pixelated diodes.
Window diameters are up to ~70 mm.
The silicon photomultiplier (SiPM) is an array of tiny APDs that operate
in Geiger mode. The dimensions are in the range of ~20 μm × 20 μm to
100 μm × 100 μm. Consequently, the number of APDs per square millimetre
can vary from 2500 to 100. The fill factor varies from <30% for the smallest
dimensions to ~80% for the largest. The signals of all of the APDs are summed.
With gains of M ≈ 10⁵–10⁶, the signal from a single photon can be easily observed.
By setting a threshold above the one electron response, spontaneous Geiger
pulses can be eliminated. The time spread of SiPM signals is very small, <100 ps.
Excellent time resolutions have been reported. Arrays of 2 pixels × 2 pixels
and 4 pixels × 4 pixels of 3 mm × 3 mm, each at a pitch of 4 mm, have been
commercially produced. A 16 pixel × 16 pixel array of 50 mm × 50 mm has
recently been introduced. Blue sensitive SiPMs have a photon detection
efficiency of ~25% at 400 nm, including a 60% fill factor.
6.4.3. Scintillator materials
6.4.3.1. Inorganic scintillators
In an inorganic scintillator, the bandgap has to be relatively large to avoid
thermal excitation and to allow scintillation photons to travel in the material
without absorption (Egap ≥ 4 eV). Consequently, inorganic scintillators are based
on ionic-crystal materials. Three steps for the production of scintillation photons
are considered (Fig.6.6): (i) interaction of radiation with the bulk material
and thermalization of the resulting electrons and holes on the energy scale,
electrons end up at the bottom of the conduction band and holes at the top of
the valence band; (ii) transport of these charge carriers to intrinsic or dopant
luminescence centres; (iii) interaction with these centres, i.e. excitation, relaxation
and scintillation. Using this model, the number of photons Nph produced under
absorption of a γ ray with energy E is:

Nph = (E/(β Egap)) S Q (6.6)

The first term on the right, E/(β Egap), is the number of e–h pairs at the bandgap edge.
Typically, β ≈ 2.5. S and Q are the efficiencies of steps (ii) and (iii).
FIG. 6.6. Energy diagram showing the main process steps in an inorganic scintillator: thermalization of electrons in the conduction band and holes in the valence band, transport with possible trapping or radiationless recombination, and excitation, relaxation and scintillation at a luminescence centre.
transitions. Using Eq. (6.6) and the proper values of Egap, only LaBr3:Ce appears
to have S × Q ≈ 1.
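This check can be reproduced directly from Eq. (6.6). The band gap value used below (~5.9 eV for LaBr3) is an assumed illustrative figure, and the measured yield is taken from Table 6.3:

```python
# Maximum possible light yield from Eq. (6.6): N_ph = E / (beta * E_gap) * S * Q.
# With S = Q = 1 this gives the theoretical limit; comparing the limit with
# the measured yield estimates S*Q. E_gap ~ 5.9 eV for LaBr3 is an assumption.

def max_yield_per_mev(e_gap_ev, beta=2.5):
    """Theoretical photons per MeV of absorbed energy for S = Q = 1."""
    return 1.0e6 / (beta * e_gap_ev)

limit = max_yield_per_mev(5.9)      # ~67 800 photons/MeV
s_times_q = 67_000 / limit          # measured LaBr3:Ce yield from Table 6.3
print(f"limit = {limit:.0f} photons/MeV, S*Q = {s_times_q:.2f}")  # S*Q = 0.99
```

The measured yield of LaBr3:Ce sits within about 1% of the theoretical limit, which is the quantitative content of the statement S × Q ≈ 1.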
TABLE 6.3. SPECIFICATIONS OF SOME INORGANIC SCINTILLATORS

Scintillator       ρ        Zeff  1/μ at    Photo-    λmax     Light yield    Energy      Decay
                   (g/cm³)        511 keV   fraction  (nm)     (photons/MeV)  resolution  time
                                  (mm)      (%)                               (%)         (ns)
NaI:Tl (a)         3.67     51    29        17        410      41 000         6.5         230
CsI:Tl             4.51     54    23        21        540      64 000         —           —
BaF2               4.88     —     —         —         220/310  1 500/10 000   —           0.8/600
Bi4Ge3O12 (BGO)    7.1      75    10.4      40        480      8 900          —           300
LaCl3:Ce (a)       3.86     49.5  28        15        350      49 000         3.3         25
LaBr3:Ce           5.07     46.9  22        13        380      67 000         2.8         16
YAlO3:Ce (YAP)     5.5      33.6  21        4.2       350      21 000         4.4         25
Lu0.8Y0.2AlO3:Ce
(LuYAP)            8.3      65    11        30        365      11 000         —           —
Gd2SiO5:Ce (GSO)   6.7      59    14.1      25        440      12 500         —           60
Lu2SiO5:Ce,Ca
(LSO)              7.4      66    11.4      32        420      ~36 000        —           36–43
Lu1.8Y0.2SiO5:Ce
(LYSO)             7.1      —     —         —         420      30 000         —           40

(a) Hygroscopic.
a light yield of ~10 000 photons/MeV of absorbed γ ray energy and the decay
times are about 2 ns. The scintillators are usually specified by a commercial code.
6.4.3.3. Storage phosphors thermoluminescence and
optically stimulated luminescence
A storage phosphor is a material analogous to an inorganic scintillator.
The difference is that a significant part of the interaction energy is stored
in long-living traps. These are the memory bits of a storage phosphor. The
lifetime must be long enough for the application considered. Readout is done
either by thermal stimulation (heating) or by optical stimulation. An electron is
lifted from the trap into the conduction band and transported to a luminescence
centre. The intensity of the luminescence is recorded. These processes have been
coined thermoluminescence and optically or photon stimulated luminescence.
Storage phosphors have been used for dosimetry for more than fifty years
(the thermoluminescence dosimeter). In particular, LiF:Mg,Ti (commercial name
TLD-100) is widely used. The sensitivity is in the range of ~50 μGy to ~1 Gy. A
newer and more sensitive material is LiF:Mg,Cu,P (GR-200), with a sensitivity in
the 0.2 μGy to 1 Gy range. Recently, an optically stimulated luminescent material
has been introduced, Al2O3:C. The sensitivity is in the range of 0.3 μGy to 30 Gy.
Storage phosphors are also used in radiography.
Bibliography
INTERNATIONAL CONFERENCE ON INORGANIC SCINTILLATORS AND THEIR
APPLICATIONS, SCINT 2007, IEEE Trans. Nucl. Sci. 55 (2008) 1029–1564.
SCINT 2009, IEEE Trans. Nucl. Sci. 57 (2010) 1157–1520.
INTERNATIONAL WORKSHOP ON ROOM-TEMPERATURE SEMICONDUCTOR
X- AND GAMMA-RAY DETECTORS (15th workshop), IEEE Trans. Nucl. Sci. 54 (2007)
761–880.
(16th workshop), IEEE Trans. Nucl. Sci. 56 (2009) 1697–1884.
KNOLL, G.F., Radiation Detection and Measurement, 4th edn, John Wiley & Sons, New York
(2010).
LEO, W.R., Techniques for Nuclear and Particle Physics Experiments, 2nd edn, Springer,
Berlin (1994).
PROCEEDINGS OF NUCLEAR SCIENCE SYMPOSIUM AND MEDICAL IMAGING
CONFERENCE (annually), IEEE Trans. Nucl. Sci. (recent volumes).
RODNYI, P.A., Physical Processes in Inorganic Scintillators, CRC Press, Boca Raton, FL
(1997).
SCHLESINGER, T.E., JAMES, R.B. (Eds), Semiconductors for Room Temperature Nuclear
Detector Applications, Academic Press, San Diego, CA (1995).
TAVERNIER, S., GEKTIN, A., GRINYOV, B., MOSES, W.M. (Eds), Radiation Detectors for
Medical Applications, Springer, Dordrecht, Netherlands (2006).
CHAPTER 7
ELECTRONICS RELATED TO NUCLEAR
MEDICINE IMAGING DEVICES
R.J. OTT
Joint Department of Physics,
Royal Marsden Hospital
and Institute of Cancer Research,
Surrey
R. STEPHENSON
Rutherford Appleton Laboratory,
Oxfordshire
United Kingdom
7.1. INTRODUCTION
Nuclear medicine imaging is generally based on the detection of X rays and
γ rays emitted by radionuclides injected into a patient. In the previous chapter,
the methods used to detect these photons were described, based most commonly
on a scintillation counter, although there are imaging devices that use either gas
filled ionization detectors or semiconductors.
Whatever device is used, nuclear medicine images are produced from a
very limited number of photons, due mainly to the level of radioactivity that can
be safely injected into a patient. Hence, nuclear medicine images are usually
made from many orders of magnitude fewer photons than X ray computed
tomography (CT) images, for example. However, as the information produced
is essentially functional in nature compared to the anatomical detail of CT, the
apparently poorer image quality is overcome by the nature of the information
produced.
The low level of photons detected in nuclear medicine means that
photon counting can be performed. Here, each photon is detected and analysed
individually, which is especially valuable, for example, in enabling scattered
photons to be rejected. This is in contrast to X ray imaging where images are
produced by integrating the flux entering the detectors. Photon counting,
however, places a heavy burden on the electronics used for nuclear medicine
imaging in terms of electronic noise and stability.
This chapter will discuss how the signals produced in the primary photon
detection process can be converted into pulses providing spatial, energy and
timing information, and how this information is used to produce both qualitative
and quantitative images.
7.2. PRIMARY RADIATION DETECTION PROCESSES
As described in Chapter 6, the methods used for the detection of X ray and
γ ray photons fall into three categories, namely the scintillation counter, gas filled
detectors and semiconductors. Each of these techniques provides several detector
types and requires different electronics to produce and utilize the signals.
7.2.1. Scintillation counters
Figure 7.1 shows a block diagram of a scintillation counter using a phosphor
and photomultiplier combination, together with the basic electronics required
to produce analogue and digital signals used to create an image. Table 6.3
shows that the phosphors used in nuclear medicine can produce 1500–67 000
optical photons per megaelectronvolt of energy deposited in the crystal and the
light emission time can vary from less than 1 ns up to ~1 μs. Additionally, the
amplification of the optical signal by a photomultiplier can vary by an order of
magnitude or more depending on the photocathode quantum efficiency and the
number of dynodes. From this, it can be seen that the pulses produced by the
scintillation counter can vary substantially in both shape and amplitude, and that
the electronic devices used to manipulate these signals must be flexible enough
to account for these variations.
FIG. 7.1. Block diagram of a scintillation counter: phosphor and PMT with electronic base and preamplifier, followed by an analogue to digital converter.
FIG. 7.2. Schematic of a two-plane multiwire proportional chamber detecting a γ ray: the incoming γ ray produces ionization that is collected at the cathode wire planes and amplified.
for a scintillation counter. However, the signals will still require some form of
amplification to produce useful analogue or digital information.
7.3. IMAGING DETECTORS
Having briefly discussed the production of signals by the three major
ionizing radiation detection processes, it is necessary to understand how these
methods are used to produce images in nuclear medicine. The two main imaging
devices used are the gamma camera and the positron camera. For completeness,
autoradiography imaging of tissue samples containing radiotracers is also
described.
Generally, both gamma camera and positron camera systems use
scintillation counters as the primary radiation detector because the stopping
power for X rays and γ rays is good in the high density scintillating crystals
used. However, there have been some examples of cameras using MWPCs and
semiconductors, and a brief description is provided here.
7.3.1. The gamma camera
Invented by Hal Anger, the gamma camera is usually based on the use of
a single large area phosphor (e.g. 50 cm × 40 cm of NaI(Tl)) coupled to up to a
hundred PMTs. The camera (Fig. 7.3) can detect γ rays emitted by a radiotracer
distributed in the body. The lead collimator placed in front of the scintillation
counter selects the direction of the γ rays entering the device and allows an image
of the biodistribution of the tracer to be made.
rates achieved in both cases; factors of 10–20 are not unusual. In addition, the
pulses from a positron camera must be carefully shaped to allow accurate timing
information to be made in coincidence circuits to ensure that both annihilation
photons from a single annihilation event are detected. Time jitter in the pulses
will affect this process, allowing random photons from multiple nuclear decays
to be included in the data acquisition. In addition, the recent introduction of
so-called time of flight cameras requires very accurate (sub-nanosecond) timing
to be made between the two annihilation photons.
FIG. 7.4. Block diagram of the coincidence electronics of a positron camera: the amplified signals from detectors D1 and D2 are digitized (ADCs) to give the energy and digital position information, while a coincidence circuit generates the acquisition trigger.
delay line is measured by the time to digital converters (TDCs) and provides
the positional information; the accuracy of this information depends on the
intrinsic properties of the delay lines and the spread of the signal at the wire
planes, and in this system is ~4 mm. Pulses produced after the gas amplification
region are used to provide the fast coincidence trigger to read the data into the
computer; a timing resolution of ~2–3 ns is readily achievable.
FIG. 7.5. Schematic of the pulse production and readout system for the PETRRA positron camera, with constant fraction discriminators (CFDs), time to digital converters (TDCs), pulse height measurement and coincidence gating; wire planes are shown as dotted lines.
could be used to determine the position of any interaction in the sensor. However,
the modest stopping power of the material coupled with the need for a cryostat to
reduce the intrinsic noise of the detector made this design impractical.
More practical systems based on room temperature operation of CZT
have been developed by GE (the Alcyone system) and Spectrum Dynamics (the
DSPECT system). In the case of the latter system designed specifically for cardiac
imaging, ~1000 individual small CZT crystals are coupled to a tungsten collimator
providing an intrinsic spatial resolution of 3.5–4.2 mm full width at half maximum
and a sensitivity of approximately eight times that of a scintillator based camera;
most of the increase in sensitivity is due to the collimator design.
Silicon photodiodes have been used as an alternative to PMTs for both
gamma camera and positron camera designs. Here, APDs have been coupled to
phosphors and because of their small size, a truly digital camera design is possible.
In practice, due to the cost of APDs, only small systems have been developed. The
recent development of silicon photomultipliers promises further improvements in
nuclear medicine imaging.
7.3.5. The autoradiography imager
Autoradiography is based on the use of radioactive labels to determine the
microscopic distribution of pharmaceuticals in tissues excised from humans
or animals. A major use in humans is to detect areas of malignancy or tissue
malfunction. In animals, the method is used to track the uptake of drugs, for instance.
The pharmaceuticals are usually labelled with long lived radiotracers that have a
short range β emission or low energy X ray or γ ray emission. Typical examples of
tracers used are 3H, 14C, 32P, 33P and 125I. Autoradiography imagers are required to
detect the emissions with high efficiency as the levels of uptake in tissue samples
are often very low. The gold standard for tissue radiography is film emulsion which
produces a high resolution (μm) image of tissues, although these detectors have
low efficiency for detecting the radiation involved. Images can take days to weeks
to produce and this can be a severe limitation if diagnostic information is desired.
Digital autoradiography systems based on the use of thin phosphors, gas filled
detectors and silicon wafers can be 50–100 times more efficient although the spatial
resolution is limited to typically a few tens of micrometres.
A phosphor based imager may use a very thin (50–100 μm) material such as
GADOX or CsI(Tl) coupled to a high resolution sensor, such as a microchannel
plate, a charge coupled device or a complementary metal oxide semiconductor
(CMOS) APD (Fig. 7.6).
FIG. 7.6. Schematic of a digital autoradiography imager: a β particle is detected in a segmented CsI(Tl) phosphor coupled to a CMOS APD sensor with its readout.
The limitations of these devices are mostly the pixel size of the sensor
and the noise in the sensor. Amplifiers with low noise are required and room
temperature operation is desirable. Such a device can have a resolution of <50 μm.
7.4. SIGNAL AMPLIFICATION

As discussed above, the primary signals from the radiation detectors are
generally small and need to be amplified without the injection of high levels
of noise into the signal readout system. A preamplifier is needed prior to the
main amplification process if the signals from the detector are very small, for
example, when a PMT has insufficient dynodes to provide a large output pulse.
Preamplifiers are usually mounted immediately next to or as part of the output
stage of the detector to minimize the noise produced prior to full amplification.
The main amplifier can then be used to maximize and shape the signal (via
current and/or voltage gain) without over-amplifying noise.
7.4.1. Typical amplifier
The output current from a PMT is directly proportional to the amount
of light received from the phosphor. Although the PMT amplifies the electron
signal produced at the photocathode by a large factor, the current produced at the
anode is still very small. Amplifiers for PMTs are specially designed to transform
this current into voltage which can be directly input into an analogue to digital
converter (ADC). A charge amplifier (Fig. 7.7) integrates the anode current I(t), so
that the output voltage is proportional to the total charge Q collected on the
capacitance C:

Vout = (1/C) ∫ I(t) dt = Q/C (7.1)
The configuration has negative feedback that increases the effective input
capacitance by a factor equal to the gain of the amplifier. This ensures that almost
all of the current flows into the amplifier even though the PMT and wiring can
have significant capacitance. The feedback also reduces the output impedance, so
that the amplifier acts as a voltage source.

FIG. 7.7. A directly coupled charge amplifier producing a voltage output by integrating the current produced in the photomultiplier tube.
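The integration performed by the charge amplifier, Eq. (7.1), can be sketched numerically. The pulse shape and the feedback capacitance below are assumptions chosen for illustration:

```python
# Numerical sketch of Eq. (7.1): the output voltage of an ideal charge
# amplifier is the integral of the PMT anode current divided by the
# feedback capacitance. Pulse shape and C are illustrative assumptions.
import math

def charge_amp_output(current_samples, dt, c_farads):
    """V_out = (1/C) * integral of I(t) dt, by rectangle-rule summation."""
    q = sum(current_samples) * dt        # collected charge (coulombs)
    return q / c_farads

dt, tau, i0 = 1e-9, 230e-9, 1e-6         # 1 ns steps, NaI:Tl-like 230 ns decay, 1 uA peak
pulse = [i0 * math.exp(-k * dt / tau) for k in range(2000)]
v_out = charge_amp_output(pulse, dt, 10e-12)   # C = 10 pF
print(f"V_out = {v_out * 1e3:.1f} mV")   # close to I0*tau/C = 23 mV
```

The output step amplitude depends only on the integrated charge, not on the exact pulse shape, which is what makes the charge amplifier a good front end for pulse height analysis.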
The shape of the output pulse is important for the measurement of both
analogue and digital information, and is defined by the output stage of the
amplifier (Fig. 7.8). The amplified signal is first passed through a CR (high pass)
filter which improves the signal to noise ratio by attenuating the low frequencies,
which contain a lot of noise and very little signal. The decay time of the pulse is
also shortened by this filter.
FIG. 7.8. CR (high pass) and RC (low pass) filter stages at the output of the amplifier.
Before the output of the amplifier, the pulse passes through an RC (low pass)
filter which improves the signal to noise ratio by attenuating high frequencies,
which contain excessive noise. The pulse rise time is lengthened by this filter.
The combined effect produces a unipolar output pulse and, with suitably chosen
values, an optimal signal to noise ratio.
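A minimal discrete-time sketch of this CR–RC chain; the time constants and sampling step are illustrative assumptions:

```python
# Discrete-time sketch of CR-RC pulse shaping: a high pass (CR) stage
# followed by a low pass (RC) stage turns a step input into a unipolar
# pulse. Time constants and sampling are illustrative assumptions.

def cr_rc_shape(x, dt, tau_cr, tau_rc):
    """Pass samples x through a CR (differentiating) then an RC (integrating) stage."""
    a_cr = tau_cr / (tau_cr + dt)
    a_rc = dt / (tau_rc + dt)
    y_cr, y_rc, prev_x = 0.0, 0.0, x[0]
    shaped = []
    for xn in x:
        y_cr = a_cr * (y_cr + xn - prev_x)   # high pass: attenuates low frequencies
        prev_x = xn
        y_rc += a_rc * (y_cr - y_rc)         # low pass: attenuates high frequencies
        shaped.append(y_rc)
    return shaped

dt = 1e-8                                    # 10 ns sampling
step = [0.0] * 10 + [1.0] * 490              # step-like input from the charge stage
y = cr_rc_shape(step, dt, 1e-6, 1e-6)        # equal 1 us time constants
peak = max(y)
print(f"unipolar peak = {peak:.2f}")         # ~1/e of the step for equal time constants
```

With equal time constants the shaped pulse rises to about 1/e of the input step at t ≈ τ and then decays back to the baseline, giving the unipolar shape described above.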
7.4.2. Properties of amplifiers
The most important properties of an amplifier are gain, bandwidth, linearity,
dynamic range, slew rate, rise time, ringing, overshoot, stability and noise:
The gain of an amplifier is defined as the log ratio of the output power/
voltage Pout to the input power/voltage Pin and is usually measured in
decibels:
gain [dB] = 10 log10(Pout/Pin) (7.2)
Gain in charge amplifiers is often expressed in millivolts output per
picocoulomb input charge.
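For example, Eq. (7.2) can be evaluated directly (the power values are illustrative):

```python
# Gain in decibels, Eq. (7.2). For voltage ratios into equal impedances the
# equivalent form 20*log10(Vout/Vin) is often used instead.
import math

def gain_db(p_out, p_in):
    """Power gain in decibels."""
    return 10 * math.log10(p_out / p_in)

print(gain_db(200.0, 2.0))   # a 100x power gain is 20 dB
```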
The bandwidth of an amplifier is defined as the range of frequencies over which
the amplifier operates and is often determined by the frequencies at which the
power output drops to half its mid-band value (the −3 dB points). This is an
important feature of an amplifier attached to a high count rate detector as
required in positron emission tomography (PET) imaging, for example.
Amplifier linearity is limited when the gain of the amplifier is increased
to saturation point, resulting in output pulse distortion. Clearly, this is
important if the dynamic range of the pulses produced by the detector is
large. Dynamic range is defined as the ratio of the smallest and largest
224
useful output signals, with the former limited by the noise in the system and
the latter by amplifier distortion.
Rise time is often defined as the time taken for the output pulse to increase
from 10% to 90% of its maximum and is a measure of the speed or frequency
response of the amplifier.
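The 10–90% definition can be sketched on a sampled pulse; the RC-type rising edge and its time constant are assumptions made for illustration:

```python
# 10-90% rise time from a sampled pulse, following the definition above.
# The pulse is an illustrative RC-type rising edge with an assumed 10 ns
# time constant, for which the 10-90% rise time is tau*ln(9) ~ 2.2*tau.
import math

def rise_time(samples, dt):
    """Time for the pulse to go from 10% to 90% of its maximum."""
    peak = max(samples)
    t10 = next(i for i, v in enumerate(samples) if v >= 0.1 * peak) * dt
    t90 = next(i for i, v in enumerate(samples) if v >= 0.9 * peak) * dt
    return t90 - t10

tau = 10e-9                                     # assumed 10 ns time constant
dt = 0.01e-9                                    # 10 ps sampling
pulse = [1 - math.exp(-k * dt / tau) for k in range(10000)]
print(f"{rise_time(pulse, dt) * 1e9:.1f} ns")   # 2.2*tau = 22.0 ns
```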
Slew rate is the maximum rate of change of the output
signal for the whole range of input signals, usually expressed in volts per
microsecond. This is very important if timing information is needed from
the detector as a poor slew rate will distort the bigger signals, making
them unsuitable for fast timing, as in PET. For PET applications, amplifier
rise times of the order of a few nanoseconds are needed, with no shape
distortion resulting from slew rate even on the biggest pulses.
Ringing is a problem when an amplifier produces a pulse that either
oscillates before reaching its maximum value or where the tail oscillates
before reaching the baseline. This can be a serious problem if timing
information is required or if the oscillations produce multiple triggers of the
output electronics downstream of the amplifier.
Stability is clearly an important parameter for an amplifier if the output
signals are to be used for either analogue or digital purposes. It is essential
that the amplifier output does not vary significantly for a given input
signal as the processes used to determine positional, energy and timing
information rely on the output for a given input being constant both in
offset, amplitude and shape. Factors that affect stability are numerous but
prime examples are variations in temperature, supply voltage and count rate
as well as long term drift.
Noise is a major impediment to the production of images using any of the
devices discussed above. Examples include thermal noise caused by the
thermal movement of charge carriers in resistors, shot noise caused by a
random variation in the number of charge carriers and flicker or 1/f noise
caused by the trapping or collisions of charge carriers in the structure of
the silicon used in the electronics. These sources combine to produce a
variation in the output signal of the combined detector/electronics system
that can affect the quality of images produced. The root mean square noise
of a system is given by the square root of the sum of the individual noise
variances.
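A quadrature sum along these lines, with purely illustrative noise figures:

```python
# Independent noise sources add in quadrature: the total RMS noise is the
# square root of the sum of the individual noise variances, as stated above.
# The contribution values are illustrative (arbitrary units).
import math

def total_rms(*rms_components):
    """Combine independent RMS noise contributions in quadrature."""
    return math.sqrt(sum(r ** 2 for r in rms_components))

print(total_rms(3.0, 4.0))                 # 5.0
print(f"{total_rms(3.0, 4.0, 1.0):.2f}")   # 5.10 -- dominated by the larger sources
```

Note that the total is dominated by the largest contributions, which is why reducing the single worst noise source usually pays off more than polishing the small ones.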
For a system using PMTs, the dominant noise component is that associated
with the number of photoelectrons produced at the photocathode as this is
amplified by the gain of the PMT dynode chain and subsequent electronics. For
a gas filled detector, the equivalent is the number of primary electrons produced
at the first stage of the ionization process, and for a silicon detector the important
parameter is the initial number of e–h pairs produced.
7.5. SIGNAL PROCESSING
Once an amplified signal has been produced, it is then used to generate
both analogue and digital information about the detected event. The analogue
signal will relate to the energy deposited in the detector and is used, for example,
to minimize the number of scattered γ rays accepted into the image production
process. The digital signal is used to produce spatial and timing information.
7.5.1. Analogue signal utilization
The analogue information is generated by sending the pulse from the
amplifier into a single or multichannel pulse height analyser. In a gamma camera,
several energy windows are available, whereby the pulse height or charge is
compared with preset values that correspond to the known energies of the rays
being detected. In the simplest case for imaging a single energy ray emission,
two thresholds can be set to reject pulses that are above or below these values
(Fig.7.9).
FIG. 7.9. A single channel pulse height analyser: an output pulse is produced when the input
pulse is between the two thresholds. This system also functions as a single channel analogue
to digital converter.
When a radiotracer that emits γ rays of several different energies is being used,
multiple thresholds can sort the information into several channels or images.
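A sketch of this windowing logic follows. The window limits are illustrative values (e.g. a 20% window around a 140 keV line), not taken from the text:

```python
# Sketch of single and multichannel pulse height analysis: a pulse is
# accepted if its height falls inside an energy window. The windows and
# radionuclide labels are illustrative assumptions.

def in_window(pulse_height, lower, upper):
    """Single channel analyser: True if the pulse is between the two thresholds."""
    return lower <= pulse_height <= upper

def sort_into_channels(pulse_height, windows):
    """Multichannel case: return the labels of all windows the pulse falls in."""
    return [label for label, (lo, hi) in windows.items() if lo <= pulse_height <= hi]

windows = {"Tc-99m": (126.0, 154.0), "I-123": (143.0, 175.0)}   # keV, illustrative
print(in_window(140.0, 126.0, 154.0))       # True: accepted
print(sort_into_channels(150.0, windows))   # falls in both overlapping windows
print(sort_into_channels(120.0, windows))   # rejected everywhere: []
```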
7.5.2. Signal digitization
Analogue signals are converted into digital signals that are subsequently
used to provide spatial and temporal information about each detected event.
FIG. 7.10. Ramp based single slope converter system for digitizing analogue pulses: the analogue pulse input and a ramp generator input feed a comparator, which enables a counter driven by a clock.
The number of clock pulses counted corresponds to the amplitude of the signal. The faster
the clock, the higher the accuracy of the digitization achieved. This is a relatively
simple and low cost solution but is slow, as the time taken to digitize a full scale
pulse is 2^N clock cycles. The pulse sequence producing the digital output is shown
in Fig. 7.11.
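A single slope conversion of this kind can be sketched as follows; the full scale voltage and bit depth are illustrative:

```python
# Sketch of a single slope (ramp) ADC: a counter runs while the ramp climbs
# toward the sampled pulse height, so the final count is proportional to the
# amplitude and the conversion time equals the code. Values are illustrative.

def single_slope_adc(v_pulse, v_full_scale, n_bits):
    """Return (digital code, clock cycles used) for one conversion."""
    steps = 2 ** n_bits
    v_per_step = v_full_scale / steps
    count = 0
    ramp = 0.0
    while ramp < v_pulse and count < steps:   # counter enabled until the ramp crosses the pulse
        ramp += v_per_step
        count += 1
    return count, count   # conversion time (in clock cycles) equals the code

code, cycles = single_slope_adc(0.75, 1.0, 8)
print(code, cycles)   # 192 192 -- larger pulses take longer to convert
```

This makes the speed penalty explicit: a full scale pulse costs 2^N cycles, which is why flash converters (Fig. 7.12) are used when conversion rate matters more than cost.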
FIG. 7.12. Schematic of a FLASH analogue to digital converter producing N bits of digital data: the analogue input pulse is compared in parallel against 2^N − 1 reference levels.
FIG. 7.13. The use of constant fraction discriminators (CFDs) on two analogue inputs to generate a fast coincidence output; the insert shows how the trigger point is set by a constant fraction of the pulse height.
The trigger points for the timing occur at a constant fraction of the shaped
analogue signal, so that the timing is not affected by the different signal pulse
heights. In this example, CFD1 generates a gate with a width set to more than
twice the measured timing resolution of the detectors. If the pulse from CFD2
falls within this gate, a coincidence (AND) output is generated; otherwise, the
event is rejected.
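A sketch of this accept/reject logic; the timing resolution and gate width below are illustrative assumptions:

```python
# Sketch of the coincidence logic described above: the first CFD opens a
# gate wider than twice the timing resolution, and an event is accepted
# only if the second CFD trigger falls inside it. Values are illustrative.

def in_coincidence(t1_ns, t2_ns, timing_resolution_ns=3.0):
    """AND-style coincidence: accept if the second trigger lies in the gate."""
    gate_width = 2.5 * timing_resolution_ns   # set to more than 2x the resolution
    return abs(t2_ns - t1_ns) <= gate_width / 2

print(in_coincidence(100.0, 102.0))   # True: within the gate, accepted
print(in_coincidence(100.0, 110.0))   # False: rejected as a random
```

A wider gate accepts more true coincidences whose timing fluctuates, but also more randoms, so in practice the gate width is a compromise set by the measured timing resolution.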
An alternative method of determining the timing from a pulse is to use the
zero crossing technique (Fig. 7.14). In this method, the pulse is differentiated to
produce a bipolar pulse; the timing is taken from the point where the pulse
crosses a reference line that is usually tied to ground, hence the name zero crossing.
Again, it is important that the pulse shapes are carefully controlled to minimize
jitter in the timing information.
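The amplitude independence of the zero crossing point can be demonstrated on a sampled pulse; the Gaussian shape and timing values are illustrative assumptions:

```python
# Zero crossing timing: after differentiation the bipolar pulse crosses
# zero at a time that does not depend on the pulse height. The Gaussian
# pulse and its parameters are illustrative assumptions.
import math

def zero_crossing_time(samples, dt):
    """Differentiate, then linearly interpolate the + to - zero crossing."""
    d = [samples[i + 1] - samples[i] for i in range(len(samples) - 1)]
    for i in range(1, len(d)):
        if d[i - 1] > 0 >= d[i]:
            frac = d[i - 1] / (d[i - 1] - d[i])   # linear interpolation
            return (i - 1 + frac) * dt
    return None

dt, t0, sigma = 0.1, 50.0, 5.0               # ns; pulse centred at 50 ns
for amplitude in (1.0, 10.0):                # timing is independent of amplitude
    pulse = [amplitude * math.exp(-0.5 * ((k * dt - t0) / sigma) ** 2) for k in range(1000)]
    print(f"{zero_crossing_time(pulse, dt):.2f} ns")   # same time for both amplitudes
```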
FIG.7.14. A timing signal generated from the zero crossing point of a differentiated signal.
If the timing information is to be stored, then the two pulses from the
CFDs can be input into a TDC. In this case, the first pulse starts and the second
one stops a clock; the number of clock pulses counted is proportional to the time
difference between the pulses. If a fast clock is used, excellent timing information
is available for use in time of flight calculations, for example.
7.6. OTHER ELECTRONICS REQUIRED BY IMAGING SYSTEMS
7.6.1. Power supplies
Low voltage supplies are used to provide the power input for semiconductor
systems where a few tens of volts are sufficient. In some cases, batteries may
provide enough power but the need to maintain a constant current and voltage
makes this a modest solution. Usually, a low voltage supply converts mains AC
power, typically 240 V (or 110 V), into DC voltages of, for example, ±15 V
and ±5 V to provide the line voltages for transistors and diodes. This is done by
combining a transformer, which reduces the voltage, and a rectifier, typically a
diode which allows only one half of the AC signal to pass; this is half-wave
rectification (Fig. 7.15).
Full-wave rectification is achieved by using a diode bridge that allows both
halves of the AC signal to be used, with one half being inverted. The oscillations
are removed using a filter, usually capacitors. The smoothest DC output is
provided by using a three phase AC input. The output is usually passed through
a voltage regulator to stabilize the voltage and remove the last traces of ripple.
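The two rectification schemes can be sketched with ideal diodes; the mains frequency and sampling are illustrative:

```python
# Sketch of half-wave vs full-wave rectification of a sinusoidal AC input,
# as in Fig. 7.15. Diodes are ideal; frequency and sampling are illustrative.
import math

def half_wave(v):  return [max(0.0, x) for x in v]   # one diode: blocks negative half
def full_wave(v):  return [abs(x) for x in v]        # diode bridge: inverts negative half

v_ac = [math.sin(2 * math.pi * 50 * t / 1000) for t in range(100)]   # 50 Hz, 1 ms steps

mean_half = sum(half_wave(v_ac)) / len(v_ac)
mean_full = sum(full_wave(v_ac)) / len(v_ac)
print(f"{mean_half:.3f} {mean_full:.3f}")   # full-wave delivers twice the average
```

Full-wave rectification doubles the average DC level and doubles the ripple frequency, which is why the subsequent capacitor filter can be smaller for the same residual ripple.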
FIG. 7.15. Conversion of AC into DC using a transformer and rectifier system, showing half-wave and full-wave rectification of the AC input.
PMTs and MWPCs require power supplies that can provide voltages up to
several kilovolts. For example, each pair of dynodes in a PMT usually has at least
100 V between them and even more may be used between the photocathode and
the first dynode to maximize the early gain of the PMT. These power supplies
usually have an oscillator and step up transformer operating at high frequency
to provide the drive plus a voltage multiplier consisting of a stack of diodes and
capacitors.
7.6.2. Uninterruptible power supplies
This form of support is needed for an imaging system to overcome loss of
power during periods of mains supply interruption. In this case, the output power
comes from the storage battery via some form of inverter. While the mains power
is available, it charges the battery as well as providing power to the imaging
device. If the mains supply is interrupted, the battery continues to provide support
to ensure that the imaging system can continue to be used. The size of the battery
support system depends directly on how long backup is needed or how long, for
example, it takes the operator to save data and shut down the system. As in most
imaging environments the mains is eventually replaced by a generator supply, the
period of support needed is often short, but usually several hours of supply are
available from an uninterruptible power supply.
7.6.3. Oscilloscopes
In order to optimize the use of pulse generating equipment, an oscilloscope
is essential. This type of device allows the pulses from the detectors to be
displayed at various stages of generation prior to their use in image production.
For example, the pulse sequences illustrated above can be displayed on an
oscilloscope and this allows the equipment to be adjusted to provide the optimum
analogue and digital pulse sequence, shape and size.
An oscilloscope allows the pulses to be displayed on a 2D display, usually
with the vertical axis representing voltage (pulse height) and the horizontal
axis time. In addition to the amplitude of the pulses, the oscilloscope display
can be used to analyse the frequency of the signals being studied and also to
detect any pulse distortion such as oscillation or saturation. In an advanced form,
the oscilloscope can function as a spectrum analyser over a wide range of pulse
frequencies.
The original oscilloscopes were based on a cathode ray tube to display
the pulses but more modern systems use liquid crystal displays connected to
ADCs and other signal processing electronics. To the user, the oscilloscope will
present as a box with a display screen, input connectors and various controls.
The input from equipment can be made either directly, using connecting cables/
sockets, or through probes, often into a high impedance (e.g. 1 MΩ) or, for high
frequencies, a low impedance input.
BIBLIOGRAPHY
HOROWITZ, P., HILL, W., The Art of Electronics, Cambridge University Press (1982).
INIEWSKI, K. (Ed.), Medical Imaging: Principles, Detectors, and Electronics, John Wiley and
Sons, Hoboken, NJ (2009).
TURCHETTA, R., Electronics signal processing for medical imaging, Phys. Med. Imaging
Appl. 240 (2007) 273–276.
WEBB, S., The Physics of Medical Imaging, Hilger (1988).
CHAPTER 8
GENERIC PERFORMANCE MEASURES
M.E. DAUBE-WITHERSPOON
Department of Radiology,
University of Pennsylvania,
Philadelphia, Pennsylvania,
United States of America
8.1. INTRINSIC AND EXTRINSIC MEASURES
8.1.1. Generic nuclear medicine imagers
The generic nuclear medicine imager, whether a gamma camera, single
photon emission computed tomography (SPECT) system or positron emission
tomography (PET) scanner, comprises several main components: a detection
system, a form of collimation to select γ rays at specific angles, electronics and a
computing system to create the map of the radiotracer distribution. This section
discusses these components in more detail.
The first stage of a generic nuclear medicine imager is the detection of the
γ rays emitted by the radionuclide. In the case of PET, the radiation of interest is
the pair of 511 keV annihilation photons that results from the interaction of the
positron emitted by the radionuclide with an electron in the tissue. For general
nuclear medicine and SPECT, there is one, or sometimes more than one, γ ray of
interest, with energies in the range of <100 keV to >400 keV.
The γ rays are detected when they interact and deposit energy in the
crystal(s) of the imaging system. There are two main types of detector:
scintillators, crystals that give off light that can be converted to an electrical
signal when the γ ray interacts, and semiconductors, crystals that generate an
electrical signal directly when the γ ray deposits energy in the crystal. Scintillation
detectors include NaI(Tl), bismuth germanate (BGO) and lutetium oxyorthosilicate
(LSO); semiconductor detectors used in nuclear medicine imagers include
cadmium zinc telluride (CZT). Radiation detectors are described in more detail in Chapter 6.
When a γ ray interacts in a scintillation crystal, it deposits some or all
of its energy. This energy is re-emitted in the form of light with a wavelength
dependent on the crystal material but not on the energy of the γ ray. The more
energy deposited in the crystal, the greater the intensity of the light emitted.
Scintillation crystals are coupled to photomultiplier tubes (PMTs), which serve
related to the amount of activity at each location), corrections must be applied for
these unwanted events as part of the reconstruction process.
Performance measures aim to test one or more of the components, including
both hardware and software, of a nuclear medicine imager.
8.1.2. Intrinsic performance
There are two general classes of measurements of scanner performance:
intrinsic and extrinsic. Intrinsic measurements reflect the performance of a
sub-part of the imager under ideal conditions. For example, measurements
made on a gamma camera without a collimator will describe the best possible
performance of the detector without the degrading effects of a collimator,
although the collimator is essential for clinical imaging. For a PET scanner,
intrinsic performance is often determined for a pair of detectors, rather than
the entire system. Intrinsic measurements are useful because they reflect the
best possible performance and can help isolate the source of any performance
degradations observed clinically. However, these measures are typically
performed under non-clinical conditions and will not reflect the performance
of the nuclear medicine imager for patient studies. Intrinsic measures also tend
to be measurements of an isolated characteristic of the system, rather than its
impact on imaging studies. They reflect the limits of performance achievable by
the detection system and electronics without collimators or image reconstruction.
8.1.3. Extrinsic performance
Extrinsic, or system, performance measures are made on the complete
nuclear medicine imager under conditions that are more clinically realistic,
although even these measures may not show the full clinical performance of
the system. On a gamma camera, extrinsic measurements are made with the
collimator in place; for SPECT and PET systems, the performance is often
measured on the reconstructed image. The extrinsic performance of a system
gives an indication of how well all of the components of the imager work together
to yield the final image. As most extrinsic performance measurements attempt to
isolate a single aspect of imaging performance (e.g. spatial resolution, count rate
performance, sensitivity), the conditions of these measurements generally do not
match the conditions encountered in patient imaging studies. However, the results
of extrinsic performance measurements are generally good indicators of clinical
performance or may provide useful information about system optimization for
clinical studies.
FIG.8.1. An example of an energy spectrum, defined as the number of measured events with
a given amplitude plotted as a function of the amplitude, where the amplitude depends directly
on the energy deposited in the crystal.
Only γ rays that have not scattered in the body will provide accurate
information about the radiotracer distribution. Accordingly, the energy window
is optimal if it includes as many photopeak events as possible, since they are
more likely not to have interacted with the tissue, and as few lower energy events
as possible, since they are more likely to be the result of one or more Compton
scatter interactions in the tissue. As the energy resolution worsens, however, it
is necessary to accept more low energy events because the photopeak includes
lower energy γ rays. For example, for detection of 511 keV annihilation photons,
the lower energy threshold for BGO (15–20% energy resolution) was typically
set to 350–380 keV, while that for LSO (12% energy resolution) is 440–460 keV
and for LaBr3 (6–7% energy resolution) the lower energy threshold can be set as
high as 480–490 keV without loss of unscattered γ rays.
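The interplay between energy resolution and the achievable lower threshold can be sketched numerically. The function below is illustrative only: the heuristic of placing the lower threshold roughly one FWHM below the photopeak is an assumption for demonstration, not a prescribed rule (manufacturers choose window edges based on the full system design).

```python
def energy_window(photopeak_kev, resolution):
    """Illustrative photopeak window for a detector with fractional
    energy resolution (FWHM divided by photopeak energy)."""
    fwhm = resolution * photopeak_kev      # FWHM of the photopeak in keV
    lower = photopeak_kev - fwhm           # heuristic lower threshold
    upper = photopeak_kev + fwhm           # symmetric upper threshold
    return fwhm, lower, upper

# 511 keV annihilation photons in LSO (~12% energy resolution)
fwhm, lo, hi = energy_window(511.0, 0.12)
print(round(fwhm, 1), round(lo, 1))   # 61.3 449.7
```

For LSO this heuristic lands near the 440–460 keV range quoted above; for BGO, with its much wider photopeak, practical windows extend somewhat further below the peak than one FWHM.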
8.2.3. Impact of energy resolution on extrinsic imager performance
The energy resolution is an intrinsic measure of detector performance; it
defines the minimum width of the energy window for a given radiotracer. The
energy window in turn affects the number of scattered photons accepted. The ratio
of scattered events to total measured events, the scatter fraction, is an extrinsic
performance characteristic that is of concern, especially for quantitative imaging.
In PET systems, for example, the clinical image is assumed to be linearly related
to the activity uptake; because scatter adds a smoothly varying background to the
image, it degrades the quantitative accuracy of the image and adds to the image
noise, even when accurately estimated and subtracted.
There are two major types of scattered event: those where the initial γ ray
scattered in the body, and those where the γ ray was not completely absorbed in
the detector but instead scattered, losing some but not all of its energy. In both
cases, the measured energy of the γ ray is lower than the energy of the original
photon because some energy is given up to the electron, and the measured
position may no longer be related to the original source of the γ ray because the
scattered photon does not travel along the same direction as the original γ ray. For
typical patient sizes, scattering in the body is much more significant than detector
scattering.
The scatter fraction is an extrinsic performance measure that describes the
sensitivity of a nuclear medicine imager to scattered events. The measurement
involves imaging a line source in a uniformly filled phantom of a specified size
at a low activity level, where scattered and unscattered events can be reasonably
well differentiated. As the amount of scatter depends on the size and distribution
of scattering material in the scanner, the measured scatter fraction cannot be used
to infer the amount or distribution of scatter in patient images. However, it is a
good indicator of the relative sensitivity of the system to scatter.
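The bookkeeping behind the scatter fraction is simple; a minimal sketch (function names are illustrative, not from any standard):

```python
def scatter_fraction(scattered, total):
    """Scatter fraction: scattered events / total measured events."""
    return scattered / total

def scattered_from_sf(sf, total):
    """Estimated scattered events in an acquisition, given a previously
    measured scatter fraction (assumes a comparable object and geometry)."""
    return sf * total

sf = scatter_fraction(35_000, 100_000)
print(sf)                                  # 0.35
print(scattered_from_sf(sf, 2_000_000))    # 700000.0
```

As the text notes, applying a phantom-measured scatter fraction to a patient acquisition is only indicative, since the true scatter depends on the size and distribution of the scattering material.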
239
CHAPTER 8
apparent when γ rays enter the crystal at an oblique angle to the face of the crystal
(e.g. near the radial edge of a system comprising a ring of detectors). In that case,
the γ rays can pass completely through the entrance crystal before interacting in
a neighbouring crystal. The γ ray is then mis-positioned as though it had entered
the neighbouring crystal, or at some intermediate location, depending on the
relative amounts of energy deposited by the two interactions.
Spatial resolution is also affected by the energy of the photon and, for
scintillation detectors, the efficiency of collection of the scintillation light by the
PMTs. The energy of the γ ray that is deposited in the crystal determines the
amplitude of the measured signal, which in turn defines how accurately it can be
localized in the detector. The spatial resolution measured in a given crystal with
99mTc (140 keV) is inferior to that which would be measured with a 511 keV photon.
As will be discussed later, the spatial resolution can also depend on the
count rate or amount of activity in the scanner. As the count rate increases, there
is an increased chance that two events will be detected at the same time in nearby
locations in the detector. These events will pile up and appear as a single event
at an intermediate location with a summed energy. This can lead to a loss of
resolution with increasing activity.
8.3.2. General measures of spatial resolution
There are several ways to characterize the spatial resolution, whether of a
detector or of a complete system. The point spread function (PSF) and line spread
function (LSF) are the profiles of measured counts as a function of position
across the point/line source. Rather than showing the complete profiles, however,
it is more convenient to characterize them by simple measures. The full width at
half maximum (FWHM) and full width at tenth maximum (FWTM) are useful
to describe the widths of the profile, although they do not give information about
any asymmetry in the response. The equivalent width was defined as a way to
combine the FWHM and FWTM into a single parameter and describe the shape
of the profile in a simple way; it is defined as the width of a box function with
a height equal to the maximum amplitude of the profile and an area equal to
the total number of counts in the profile above 1/20 of its maximum amplitude.
Reducing the PSF or LSF to a few parameters carries with it a loss of information
about the spatial response of the imager; for example, LSFs or PSFs can have
very different shapes and still have the same FWHM.
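These profile measures are straightforward to compute from a sampled PSF or LSF. A sketch in pure Python, using linear interpolation between samples and assuming a single-peaked profile on a uniform grid:

```python
import math

def full_width(xs, ys, frac):
    """Full width of a single-peaked profile at frac * maximum,
    with linear interpolation between samples."""
    thr = frac * max(ys)
    left = right = None
    for i in range(1, len(ys)):
        if left is None and ys[i - 1] < thr <= ys[i]:
            t = (thr - ys[i - 1]) / (ys[i] - ys[i - 1])
            left = xs[i - 1] + t * (xs[i] - xs[i - 1])
        if ys[i - 1] >= thr > ys[i]:
            t = (ys[i - 1] - thr) / (ys[i - 1] - ys[i])
            right = xs[i - 1] + t * (xs[i] - xs[i - 1])
    return right - left

def equivalent_width(xs, ys):
    """Width of a box with the profile's peak height and the same area
    as the profile above 1/20 of its maximum amplitude."""
    dx = xs[1] - xs[0]
    peak = max(ys)
    area = sum(y for y in ys if y > peak / 20) * dx
    return area / peak

# Gaussian LSF with sigma = 2 mm: FWHM should be ~2.355 * sigma = 4.71 mm
xs = [i * 0.01 for i in range(-1000, 1001)]
ys = [math.exp(-x * x / 8.0) for x in xs]
print(round(full_width(xs, ys, 0.5), 2))   # 4.71
print(round(full_width(xs, ys, 0.1), 2))   # 8.58
```

For a pure Gaussian, FWTM/FWHM is fixed at about 1.82, so quoting both widths mainly conveys how far the measured tails depart from Gaussian behaviour.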
The modulation transfer function (MTF) is one way to more completely
characterize the ability of a system to reproduce spatial frequencies. The MTF
is calculated as the Fourier transform of the PSF and is a plot of the response of
a system to different spatial frequencies. High spatial frequencies correspond to
fine detail and sharp edges, while low spatial frequencies correspond to coarse
detail. The better the response at high frequencies, the smaller the structures
that can be resolved. A flat response across all spatial frequencies means that
the system most accurately reproduces the object. As it is difficult to compare
imaging performance based on the MTF, however, the FWHM and FWTM are
used to characterize spatial resolution.
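The MTF can be sketched directly as the magnitude of the discrete Fourier transform of a sampled PSF, normalized to unity at zero frequency. A slow O(n²) DFT is used here to stay dependency free; this is a demonstration, not an efficient implementation:

```python
import math

def mtf(psf, dx):
    """MTF of a sampled PSF: |DFT| normalized to 1 at zero frequency.
    Returns (spatial frequencies, MTF values) up to the Nyquist limit."""
    n = len(psf)
    total = sum(psf)
    freqs, values = [], []
    for k in range(n // 2):
        re = sum(p * math.cos(2 * math.pi * k * i / n) for i, p in enumerate(psf))
        im = sum(p * math.sin(2 * math.pi * k * i / n) for i, p in enumerate(psf))
        freqs.append(k / (n * dx))                  # cycles per unit length
        values.append(math.hypot(re, im) / total)
    return freqs, values

# Gaussian PSF, sigma = 1 mm, sampled every 0.25 mm
dx = 0.25
psf = [math.exp(-((i - 64) * dx) ** 2 / 2.0) for i in range(128)]
freqs, m = mtf(psf, dx)
print(round(m[0], 3))         # 1.0
print(m[1] > m[4] > m[8])     # True: response falls with spatial frequency
```

A broader PSF gives an MTF that falls off faster with frequency, which is the quantitative statement of "the better the response at high frequencies, the smaller the structures that can be resolved".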
8.3.3. Intrinsic measurement of spatial resolution
The intrinsic spatial resolution is a measure of the resolution at the detector
level (or detector pair level for PET) without any collimation. It defines the
best possible resolution of the system, since later steps in the imaging hardware
degrade the resolution from the detector resolution. On gamma cameras, the
intrinsic resolution is determined using a bar phantom with narrow slits of
activity across the detector. On PET systems, the intrinsic resolution is measured
as a source is moved between a pair of detectors operating in coincidence. The
FWHM and FWTM of profiles of detected counts as a function of position are
taken as measures of the intrinsic spatial resolution. In both cases, the intrinsic
spatial resolution sets a limit on the resolution but does not translate easily into
a clinically useful value because other components of the imager impact the
resolution in the image.
8.3.4. Extrinsic measurement of spatial resolution
The spatial resolution of a nuclear medicine imager depends on many
factors other than just the detectors. The linear and angular sampling play a
significant role: to preserve the intrinsic resolution, the imager should be sampled
every 0.1 FWHM. Under-sampling leads to small structures being missed in
the image. For single-photon imagers, a collimator is used to limit the direction
of γ rays incident on the detector. Collimators are designed for specific purposes
(e.g. sensitivity or resolution) and/or specific radionuclides. As the hole size and
spacing of a collimator will affect the spatial sampling, each collimator will lead
to different system spatial resolution.
The reconstruction processing performed to create tomographic images
in SPECT or PET also affects the image resolution. Reconstruction algorithms
are generally chosen to preserve as much fine detail and edge information as
possible, while keeping image noise sufficiently low so that it is not confused
with actual structure. The parameters of reconstruction can, therefore, change
with the imaging study and with the number of events measured.
The spatial resolution is not constant throughout the imaging field of view
(FOV). For PET systems, the resolution does not vary significantly with location
of the source between two detectors in a detector pair, but the system's radial
resolution often degrades as the source is moved radially outwards from the
centre of the scanner. For gamma cameras, the resolution degrades as the source
is moved away from the detector face. For this reason, system spatial resolution
measurements are performed with the source at different locations in the imaging
FOV.
Extrinsic measures of spatial resolution are made under more clinically
realistic conditions and include the effects of the collimator (for single photon
imaging) and reconstruction processing. The extrinsic spatial resolution is
typically measured with a small point or line source of activity of a sufficiently low
amount such that effects seen at high count rates (i.e. mis-positioning of events)
are negligible. Measurements of system spatial resolution can be performed
in air or with scattering material added. A stationary source is positioned at
specified locations throughout the nuclear medicine imager's FOV. The spatial
resolution is determined from the images, including any reconstruction or
processing steps, by drawing profiles through the source. No spatial smoothing
or other post-processing is performed. In addition, any resolution modelling or
resolution recovery techniques applied during clinical reconstruction are not
used in the measurement of extrinsic resolution. The extrinsic spatial resolution
is distinguished from the intrinsic resolution because it includes many effects
not seen with the intrinsic resolution: collimator blurring, linear and angular
sampling, reconstruction algorithm, spatial smoothing, and impact of electronics.
While the extrinsic resolution measurement reflects the resolution of the
complete imaging system, the spatial resolution achieved in patient images
is typically somewhat worse than the extrinsic spatial resolution. The spatial
sampling is finer than occurs clinically because the pixel size is typically smaller
than that used for patient studies in order to sample the PSF or LSF sufficiently.
For imagers that reconstruct the data, the reconstruction algorithm in the
performance measurement is often not the technique applied to clinical data;
an analytical algorithm such as filtered back projection is generally specified
for tomographic systems to standardize results between systems. Another key
determinant of the clinical resolution is noise in the data that necessitates noise
reduction through spatial averaging (smoothing), which blurs the image. For
data with high statistics, a sharp reconstruction algorithm can be applied, and the
resulting image has good spatial resolution. For more typical nuclear medicine
studies, where the number of detected events is limited, some form of spatial
smoothing is applied, with the resulting blurring of fine structures.
two events will arrive within the resolving time of the detector; this possibility
increases as the activity in the imager increases.
There are two kinds of dead time: non-paralysable and paralysable (see also
Chapter 6). Non-paralysable dead time arises when an event causes the system
to be unresponsive for a period of time, so that any later events that arrive
during that time are not recorded. For paralysable dead time, the second event
is not only not recorded but also extends the period for which the electronics
are unresponsive. At moderate count rates, paralysable and non-paralysable dead
times are the same; it is only at high count rates that the two types of dead time
differ (see Fig. 8.2). It can be seen that systems with non-paralysable dead time
saturate at high count rates, while those with paralysable dead time peak and then
record fewer events as the activity increases. This leads to an ambiguity in the
measured count rate: the same observed count rate corresponds to two different
activity levels. The system dead time performance of nuclear medicine scanners
is typically intermediate between paralysable and non-paralysable dead time
because some components have paralysable dead time while other components
have non-paralysable dead time.
FIG. 8.2. Observed count rate as a function of incident count rate for the cases of no dead time, non-paralysable dead time and paralysable dead time.
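The two idealized behaviours can be compared numerically using the standard expressions (see also Chapter 6): for a true event rate n and a dead time τ per event, the observed rate is m = n/(1 + nτ) in the non-paralysable case and m = n·exp(−nτ) in the paralysable case. A minimal sketch with an illustrative τ:

```python
import math

def nonparalysable(n, tau):
    """Observed rate for non-paralysable dead time tau (s per event)."""
    return n / (1.0 + n * tau)

def paralysable(n, tau):
    """Observed rate for paralysable dead time tau (s per event)."""
    return n * math.exp(-n * tau)

tau = 1e-6          # illustrative 1 microsecond dead time
low, high = 1e3, 5e6

# At moderate rates the two models nearly agree...
print(round(nonparalysable(low, tau)), round(paralysable(low, tau)))   # 999 999
# ...at high rates the non-paralysable model saturates (towards 1/tau),
# while the paralysable model has already peaked (maximum 1/(e*tau) at n = 1/tau)
print(round(nonparalysable(high, tau)), round(paralysable(high, tau)))
```

The paralysable curve's turnover is what produces the ambiguity noted above: the same observed rate corresponds to two different activity levels, one on each side of the peak.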
With increased dead time, additional activity injected in the patient does not
lead to a comparable improvement in image quality or reduction in image noise.
Dead time losses depend on the single event rate, coincidence count rate (for
PET), and the analogue and digital design characteristics of the nuclear medicine
imager. Dead time losses can depend on the activity distribution, especially for
PET because of the different single photon and coincidence rate relationship
with source distribution. They also depend on the radioisotope because dead
time results from all γ rays that interact in the detector, not just the photons that
fall within the energy window. For imaging studies with a large dynamic range
(e.g. cardiac scans), count rate performance is critical.
To correct for event losses due to dead time, a correction based on a
decaying source study is often applied to clinical data. The dead time correction
will generally correct for the loss of counts, so that the number of counts in the
image is independent of the count rate; it does not, however, compensate for the
higher image noise that arises because fewer events are actually measured.
8.4.3. Count rate performance measures
The generic measurement of count rate performance involves determining
the response of the nuclear medicine imager as a function of activity presented
to the system. Typically, this requires starting with a high amount of activity and
acquiring multiple images over time as the activity decays. The energy window is
set at low activity levels and is not changed at higher activities to accommodate
a shift in the photopeak due to pile-up effects. By comparing the observed events
with the counts that would be expected after decay correction of events detected
at low activities, the system dead time can be determined as a function of activity
level. It is especially important to determine the maximum measurable count
rate, since higher activities would result in no increase and perhaps a decrease
in detected counts. While most count rate performance measures call for starting
with a high activity and imaging as the activity decays, if too high an activity
is used at the beginning of the measurement, the detector may show effects of
saturation during later measurements at lower activities. Therefore, the amount
of activity at the beginning of the study must be sufficient to measure the peak
count rate but not be so high as to saturate the system for a significant period.
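The comparison of observed counts with decay-extrapolated expected counts can be sketched as follows. The numbers are illustrative; the 1.83 h half-life corresponds to an 18F-like decay, and the reference rate is assumed to be measured at an activity low enough for dead time losses to be negligible:

```python
def expected_rate(ref_rate, ref_time_h, time_h, half_life_h):
    """Extrapolate a count rate measured at low activity (negligible dead
    time losses) to another time point using radioactive decay alone."""
    return ref_rate * 2.0 ** ((ref_time_h - time_h) / half_life_h)

def dead_time_loss(observed, expected):
    """Fractional count loss attributed to dead time."""
    return 1.0 - observed / expected

# Low-activity reference: 5 kcps measured 12 h into the decay (T1/2 = 1.83 h)
exp_rate = expected_rate(5_000.0, 12.0, 0.0, 1.83)
loss = dead_time_loss(230_000.0, exp_rate)
print(round(exp_rate))      # ideal (loss-free) rate at t = 0
print(round(loss, 2))       # fraction of events lost at t = 0
```

Repeating this for every acquisition in the decay series yields the dead time as a function of activity, as described above.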
Intrinsic count rate performance measurements are performed with a source
in air and without any detector collimation. This is typically performed only on
gamma cameras. The system, or extrinsic, count rate performance is measured
with the complete system, including any collimation or detector motion, and a
distributed source with scattering material (e.g. a cylindrical phantom of specified
dimensions or a source placed within scattering material). The scatter adds low
energy photons that contribute to pile-up and dead time that are not present in the
intrinsic measurement.
For PET, random coincidences also increase as the activity increases;
whereas the true coincidence rate would increase linearly with activity in the
absence of dead time losses, the random coincidence rate increases quadratically
with activity, so that their impact becomes greater at higher count rates. The
activity where the random rate equals the true event rate is of importance, in
addition to the activity and count rate at which the true count rate saturates or
peaks. A global measure of the impact of random coincidences and scatter on
image quality is given by the noise equivalent count rate (NECR), defined as:

NECR = T²/(T + S + kR)  (8.1)
where T, S and R are the true, scatter and random coincidence count rates,
respectively, and k is a factor that is equal to one if a smooth estimate of random
coincidences is used and two if a noisy estimate is used. This parameter does not
include reconstruction effects or local image noise differences but can be useful
in determining optimal activity ranges.
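Equation (8.1) is easy to evaluate, and scanning it over activity shows how a peak NECR arises. The count rate model used in the scan below (paralysable trues, quadratic randoms, fixed scatter fraction) is an invented toy model for illustration only:

```python
import math

def necr(t, s, r, k=1):
    """Noise equivalent count rate, Eq. (8.1): NECR = T^2 / (T + S + kR).
    k = 1 for a smooth randoms estimate, k = 2 for a noisy one."""
    return t * t / (t + s + k * r)

print(round(necr(100_000, 40_000, 30_000, k=1)))   # 58824

# Toy count rate model (illustrative only): trues saturate with activity,
# randoms grow quadratically, scatter fraction fixed at 0.35
def rates(act):
    trues = 1000.0 * act * math.exp(-0.004 * act)
    return trues, 0.35 * trues, 2.0 * act * act

best = max(range(1, 500), key=lambda act: necr(*rates(act)))
print(best)   # activity (arbitrary units) at which NECR peaks
```

Because randoms grow quadratically while trues saturate, NECR always peaks at some finite activity; this is why NECR curves are used to suggest optimal injected activity ranges.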
For systems that correct for dead time, it is important to apply dead time
correction and to reconstruct the data in addition to looking at the count rates.
The quantitative accuracy of the dead time correction is determined by looking at
a large region of interest in decay-corrected, reconstructed images; the counts in
the region of interest should be independent of activity level. It is also important
to examine the images at high activities for artefacts that may arise due to
spatially-varying mis-positioning effects or inaccuracies in various corrections
with increased activity.
8.5. SENSITIVITY
8.5.1. Image noise and sensitivity
Images from nuclear medicine devices are typically noisy because the
amount of activity that can be safely injected and/or the scan duration without
patient discomfort or physiological changes in activity distribution is limited. The
number of detected events for a given amount of activity in the imaging system's
FOV is an important performance characteristic because a more efficient imager
can achieve low image noise with lower injected activity than a less efficient
system. Noise in the image can affect both visual (qualitative) image quality
CHAPTER 9
PHYSICS IN THE RADIOPHARMACY
R.C. SMART
Department of Nuclear Medicine,
St. George Hospital,
Sydney, Australia
9.1. THE MODERN RADIONUCLIDE CALIBRATOR
9.1.1. Construction of dose calibrators
Throughout the world, the instrument that is used in nuclear medicine to
measure radioactivity is the calibrated re-entrant ionization chamber, commonly
known as a radionuclide calibrator or dose calibrator. Commercial systems
comprise a cylindrical well ionization chamber connected to a microprocessor-controlled
electrometer providing calibrated measurements for a range of common
radionuclides (Fig. 9.1). The chamber is usually constructed of aluminium and filled
with argon under pressure (typically 1–2 MPa or 10–20 atm). Dose calibrators
with reduced gas pressure are available for positron emission tomography (PET)
production facilities where very large activities may be measured.
A well liner, made of low atomic number material (e.g. lucite (Perspex))
which can be removed for cleaning, prevents the ionization chamber from
becoming accidentally contaminated. A sample holder is provided into which
a vial or syringe can be placed to ensure that it is positioned optimally within
the chamber. The dose calibrator may include a printer to document the activity
measurements or an RS-232 serial communications port or USB port to interface
the calibrator to radiopharmacy computerized management systems.
The chamber is typically shielded by the manufacturer with 6 mm of
lead to ensure low background readings. Depending on the location of the dose
calibrator, the user may require additional shielding, either to reduce background
in the chamber or to protect the operator when measuring radionuclides of
high energy and activity. However, this will alter the calibration factors due to
backscattering of photons together with the emission of Pb K shell X rays arising
from interactions within the lead shielding. If additional shielding is used, the
dose calibrator should be recalibrated or correction factors determined to ensure
that the activity readings remain correct.
As examples of commercial systems, the specifications of two widely used
dose calibrators are given in Table 9.1.
TABLE 9.1. SPECIFICATIONS OF TWO COMMERCIAL DOSE CALIBRATORS

Specification                 Capintec CRC-25R                Atomlab 200
---------------------------   -----------------------------   -----------------------------
Ionization chamber            26 cm deep × 6 cm diameter
  dimensions
Measurement range
Nuclide selection             8 pre-set, 5 user-defined       10 pre-set, 3 user-defined
                              (80 radionuclide calibrations   (94 radionuclide calibrations
                              in memory)                      in manual)
Display units                 Bq or Ci                        Bq or Ci
Electrometer accuracy         <2%                             1%
Response time                 Within 2 s
Repeatability                 1%                              0.3%
The nuclide efficiency is obtained by summing over all photon emissions of the radionuclide:

ε_nuclide = Σ_i p_i(E_i) ε_i(E_i)  (9.1)

where

p_i(E_i) is the emission probability per decay of photons of energy E_i;

and ε_i(E_i) is the energy dependent photon efficiency of the ionization chamber.
Figure 9.2 illustrates a typical efficiency curve as a function of photon
energy. Thin-walled aluminium chambers show a strong peak in efficiency
at photon energies around 50 keV. This results from the rapid increase of the
probability of photoelectric interactions in the filling gas with decreasing energy
and the low energy cut-off with aluminium walls at about 20 keV.
Knowing the energy dependent photon efficiency curve for a specific
ionization chamber will enable the nuclide efficiency for any radionuclide to be
determined from the photon emission probability for each photon in its decay.
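The weighted sum of Eq. (9.1) can be sketched directly. The decay scheme and the flat efficiency curve below are invented placeholders for illustration, not real chamber data:

```python
def nuclide_efficiency(emissions, efficiency):
    """Eq. (9.1): sum of p_i(E_i) * eps_i(E_i) over the photons in the decay.
    emissions: list of (energy_keV, probability_per_decay) pairs.
    efficiency: function mapping energy in keV to chamber photon efficiency."""
    return sum(p * efficiency(e) for e, p in emissions)

# Invented two-photon decay scheme (111In-like) and a flat placeholder
# efficiency curve; a real curve would follow the shape of Fig. 9.2
demo_emissions = [(171.0, 0.90), (245.0, 0.94)]
eff = nuclide_efficiency(demo_emissions, lambda e_kev: 0.5)
print(round(eff, 2))   # 0.92
```

With a measured efficiency curve in place of the placeholder, this is exactly how a calibration factor for a new radionuclide would be estimated from its published emission probabilities.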
The 511 keV annihilation radiation will be measured when the activity of
positron emitting radionuclides is to be assayed. A single calibration factor for all
positron emitters cannot be used as the emission probability of the positrons must
be taken into account. The probability (branching ratio) of positron emission for
11
C is 100% and for 18F is 96.7%.
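Under the assumption that the chamber current for a positron emitter is dominated by the 511 keV annihilation photons, the response per unit activity scales with the positron branching ratio, so the relative response of different positron emitters can be sketched as:

```python
# Positron branching ratios quoted in the text
BRANCHING = {"C-11": 1.000, "F-18": 0.967}

def relative_response(nuclide, reference="F-18"):
    """Chamber response per unit activity relative to a reference nuclide,
    assuming the current is dominated by 511 keV annihilation photons."""
    return BRANCHING[nuclide] / BRANCHING[reference]

# Per becquerel, 11C produces ~3.4% more annihilation photons than 18F
print(round(relative_response("C-11"), 3))   # 1.034
```

This is why a single calibration factor cannot be shared across positron emitters: the difference is exactly this branching-ratio factor.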
established over the full range of intended use when the unit is commissioned
and verified as part of the quality control programme (see Section 9.2.1.2).
also be due to contamination on either the source holder itself or the well liner.
Most dose calibrators provide a background subtraction feature. An accurate
measurement of the existing radiation level is made by the calibrator (usually
integrating over several minutes to improve precision) which is then automatically
subtracted from each subsequent reading. This may lead to erroneous results if
the background radiation has changed since it was measured, owing to the presence
of additional nearby sources or contamination. It is, therefore, essential to make
regular checks of the background radiation level.
FIG.9.5. The effects of geometry and sample size on dose calibrator readings, demonstrated
for 111In measured in varying syringes (reproduced from Ref. [9.1]).
radiation will change as the source volume changes. This will be particularly
important for radionuclides with low energy components such as 123I. For 99mTc,
the correction will usually be less than 1% but should be confirmed for a new
dose calibrator or when the supplier of the syringes changes.
9.1.3.7. Source position
The manufacturer's source holder is designed to keep the source at the area
of maximum response on the vertical axis of the well. Variations in response
due to changes in vertical height or horizontal position of a few millimetres are
usually insignificant.
the vials normally used in the practice. Similarly, the calibration of the activity
within the size of syringe to be used clinically should be established. Published
results comparing the intrinsic efficiencies of dose calibrators from five different
manufacturers found that all systems had a good calibration for 32P, a reduction
in efficiency of approximately 10–20% for 89Sr, and a wide divergence in
efficiency for 90Y. For this radionuclide, the results obtained using the calibration
factors supplied by the manufacturers ranged from 64 to 144% of the true value,
re-emphasizing the need for the calibration to be confirmed within the nuclear
medicine department.
Several β emitters used for radionuclide therapy include a γ ray component.
These radionuclides include 131I (364 keV, 81.5% abundance) and 186Re (137 keV,
9.5% abundance). For these radionuclides, the ionization chamber efficiency
is primarily determined by the γ contribution, and the manufacturer's supplied
calibrations will usually be accurate to within 10%.
9.1.5. Problems arising from radionuclide contaminants
Unfortunately, it is often not possible for a solution of a radionuclide to
be totally free of other radionuclides. The proportion of the total radioactivity
that is present as a specific radionuclide is defined as the radionuclide purity.
National and international pharmacopoeia specify the radionuclidic purity of
a radiopharmaceutical. For example, the European Pharmacopoeia entry for
67
Ga-citrate injection requires that no more than 0.2% of the total radioactivity
be due to 66Ga. This requirement must be met at all times up to the expiry time of
the product. The US Pharmacopoeia is less stringent, specifying that not less than
99% of the total radioactivity be present as 67Ga at the time of calibration.
The presence of contaminants, even when less than 1% of the total activity,
can have a marked effect on the ionization chamber current and, thus, on the
measured activity. The British Pharmacopoeia specification for 201Tl-thallous
chloride requires that Not more than 2.0 percent of the total radioactivity
is due to thallium-202 and not less than 97.0 percent is due to thallium-201.
Thallium-202 has a half-life of 12.2 d and the predominant photon energy is
440 keV. Another possible contaminant is 200Tl which has a half-life of 1.09 d
and prominent energies at 368 keV and 1.2 MeV. Both of these radionuclide
contaminants will have a high efficiency in a dose calibrator. As the half-life of
202Tl is significantly longer than that of 201Tl, the relative proportion of 202Tl to
201Tl will increase over time. If the accuracy of a dose calibrator is to be checked
with a 201Tl source, the apparent accuracy could change depending on when the
measurements are taken relative to the stated calibration date. The presence of
these high energy contaminants will have an adverse effect on image quality due
to increased septal penetration and will also lead to an increased radiation dose to
the patient. The effective dose, in millisieverts per megabecquerel, for 200Tl, 201Tl
and 202Tl is 0.238, 0.149 and 0.608, respectively. It should be noted that these
problems will be increased if the radiopharmaceutical is administered prior to the
nominal calibration date, as the proportion of 200Tl will be higher.
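The growth of the 202Tl proportion can be sketched from simple exponential decay. The 12.2 d half-life of 202Tl and the 2% limit are from the text; the ~73 h (3.04 d) half-life of 201Tl is a value assumed here for illustration:

```python
def fraction_202(t_days, f0=0.02, t_half_201=3.04, t_half_202=12.2):
    """Fraction of total activity due to 202Tl at time t (days relative to
    the calibration date, negative before), starting from fraction f0."""
    a201 = (1.0 - f0) * 2.0 ** (-t_days / t_half_201)
    a202 = f0 * 2.0 ** (-t_days / t_half_202)
    return a202 / (a201 + a202)

print(round(fraction_202(0.0), 3))    # 0.02 at the calibration date
print(round(fraction_202(7.0), 3))    # proportion grows after calibration
print(fraction_202(-2.0) < 0.02)      # True: 202Tl fraction is lower earlier
```

Note the asymmetry with 200Tl: its half-life (1.09 d) is shorter than that of 201Tl, so its proportion is higher *before* the calibration date, which is the situation the text warns about for early administration.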
9.2. DOSE CALIBRATOR ACCEPTANCE TESTING
AND QUALITY CONTROL
9.2.1. Acceptance tests
Acceptance tests for dose calibrators should include measurements of the
accuracy, reproducibility, linearity and geometry response. These are required to
ensure that the unit meets the manufacturer's specifications and to give baseline
figures for subsequent quality control.
9.2.1.1. Accuracy and reproducibility
The accuracy is determined by comparing activity measurements of
a traceable calibrated standard with the supplier's stated activity, corrected for
radioactive decay. The accuracy is expressed as the per cent deviation from the actual
activity and should be measured for all radionuclides to be used routinely. It is
recommended that, for later quality control, measurements of a long lived source,
for example 137Cs, be recorded at the time of initial testing for each radionuclide
setting to be used clinically.
The reproducibility, or constancy, can be assessed by taking repeated
measurements of the same source. If the sample holder is removed from the
chamber between each measurement, the measured reproducibility will include
any errors associated with possible variations in source position.
9.2.1.2. Linearity
There are several approaches to the measurement of the linearity response of
a dose calibrator. Typically, a vial containing a high activity of 99mTc is measured
repeatedly over a period of at least 5 d. During this time, a 100 GBq source will
decay to 0.1 MBq. It is essential that the initial activity represents the highest
activity that is likely to be used in clinical practice, which will usually be the
first elution from a new 99Mo/99mTc generator. A semi-log plot of the measurements,
corrected for background, should follow the expected decay of the radionuclide.
Any deviation from the expected line at high activities indicates saturation of
response of the ionization chamber. Accurate background measurements, at the time of each assay, are essential as the background will become an increasing component of the reading as the source decays. Deviations from linearity at low activities are likely to be due to radionuclide impurities, such as 99Mo in vials containing 99mTc.
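A sketch of the decay based linearity check for 99mTc: each background corrected reading is compared with the activity predicted by decay from the first measurement. The 6.02 h half-life and the sample readings are assumptions added for illustration.

```python
TC99M_HALF_LIFE_H = 6.02  # hours (assumed; not stated in the text above)

def linearity_deviations(times_h, readings_mbq, backgrounds_mbq):
    """Per cent deviation of each background-corrected reading from the
    decay-predicted activity, using the first reading as the reference."""
    corrected = [r - b for r, b in zip(readings_mbq, backgrounds_mbq)]
    a0, t0 = corrected[0], times_h[0]
    out = []
    for t, a in zip(times_h, corrected):
        expected = a0 * 2 ** (-(t - t0) / TC99M_HALF_LIFE_H)
        out.append(100.0 * (a - expected) / expected)
    return out

# Hypothetical readings at 0, 1 and 2 half-lives; a saturated first
# measurement would show up as later points sitting above the line.
print(linearity_deviations([0, 6.02, 12.04],
                           [100000.0, 50100.0, 25000.0],
                           [0.0, 0.0, 0.0]))
```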
Another approach that can be used to check the linearity requires a series of
radioactive sources that cover the range of activities to be measured. The sources
should all be prepared from the same stock solution and the dispensed volumes
measured accurately by weighing the vials pre- and post-dispensing. The volume
of liquid in each vial should be adjusted with a non-radioactive solution, so that
the volume is identical in each vial, to eliminate any geometry dependency in
the measurement. The measured activities are corrected for decay to the time of
measurement of the first vial and plotted against the dispensed volumes to assess
the calibrator linearity. The error in this method will be increased if there are
any small variations in the vial wall thickness as the same vial is not used for all
measurements.
Finally, linearity can be assessed by repeated measurements on a single
vial using a series of graded attenuators appropriate for a specified test source to
reduce the measured ionization current. These are typically a series of concentric
cylinders that fit over the vial. The attenuation through each cylinder must be
accurately known to use this method.
9.2.1.3. Geometry
The measured activity may vary with the position of the source within
the ionization chamber, with the composition of the vial or syringe, or with the
volume of liquid within the vial or syringe. Appropriate correction factors must be
established for the containers and radionuclides to be used clinically, especially
if radionuclides that have a substantial component of low energy photons, such
as 123I, are to be used. For each vial or syringe to be used clinically, a series of
measurements should be undertaken in which the activity remains constant, but
the volume is increased from 10 to 90% of the maximum volume by the addition
of water or saline. Corrected for decay, a plot of activity against volume should
be a straight horizontal line. Any deviations from this can be used to calculate the
appropriate correction factor.
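Deriving the volume dependent correction factors from such a series can be sketched as follows, with hypothetical decay corrected readings; the smallest volume is arbitrarily taken as the reference here.

```python
def volume_correction_factors(volumes_ml, decay_corrected_readings):
    """Correction factor per volume such that reading x factor recovers
    the true (constant) activity; the first entry is the reference."""
    ref = decay_corrected_readings[0]
    return {v: ref / r for v, r in zip(volumes_ml, decay_corrected_readings)}

# Hypothetical constant-activity vial series, volume increased 1-9 mL:
factors = volume_correction_factors([1, 3, 5, 7, 9],
                                    [100.0, 99.0, 98.5, 97.8, 97.0])
print(factors)
```

A flat plot of reading against volume corresponds to all factors being 1.0; the slight fall-off in the hypothetical readings yields factors slightly above 1 at larger volumes.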
Similarly, vial to syringe correction factors can be determined by measuring
the activity transferred from the vial to the syringe (original vial activity minus
residual activity) and comparing this to the activity measured in the syringe itself.
Geometry dependencies should not change over time; however, if
the practitioner changes the manufacturer of the syringes or obtains the
radiopharmaceuticals in a different vial size, a new set of calibration factors
should be determined.
The accuracy of the instruments, at activity levels above 3.7 MBq, shall be such that the measured activity of a standard source shall be within 10% of the stated activity of that source [9.5];
The reproducibility shall be such that all of the results in a series of ten consecutive measurements on a source of greater than 100 µCi (3.7 × 10⁶ Bq) in the same geometry shall be within 5% of the average measured activity for that source [9.5].
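These two acceptance criteria can be sketched as simple checks, with the tolerances taken from the ANSI figures quoted above:

```python
def accuracy_ok(measured_mbq, certified_mbq, tol_percent=10.0):
    """Accuracy criterion: measured activity within the stated tolerance
    of the certified activity of the standard source."""
    return abs(measured_mbq - certified_mbq) / certified_mbq * 100.0 <= tol_percent

def reproducibility_ok(readings, tol_percent=5.0):
    """Reproducibility criterion: every reading in a series (e.g. ten
    consecutive measurements) within the tolerance of the series mean."""
    mean = sum(readings) / len(readings)
    return all(abs(r - mean) / mean * 100.0 <= tol_percent for r in readings)

print(accuracy_ok(95.0, 100.0), reproducibility_ok([100.0, 101.0, 99.0]))
```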
9.4. NATIONAL ACTIVITY INTERCOMPARISONS
National metrology institutes are responsible for the development and
maintenance of standards, including activity standards. These institutes, often
in collaboration with the relevant national professional body, have undertaken
national comparisons of the accuracy of the dose calibrators used in clinical
practice. Such comparisons have used, where possible, the clinical radionuclides
67Ga, 123I, 131I, 99mTc and 201Tl, and have been carried out in Argentina, Australia,
Brazil, Cuba, the Czech Republic, Germany, India and the United Kingdom.
In some countries, such as Cuba and the Czech Republic, participation in the
comparison is mandatory, while in many other countries it is voluntary. The
surveys can also be used to measure the reproducibility of the calibrators.
As an example, Table 9.3 shows the results from a survey undertaken in
Australia in 2007.
TABLE 9.3. SUMMARY OF THE RESULTS OF THE DOSE CALIBRATOR SURVEY UNDERTAKEN IN AUSTRALIA IN 2007

Radionuclide                   99mTc    131I    67Ga    201Tl
No. of calibrators             167      164     116     162
Within 5% error                86%      80%     84%     73%
Within 10% error               98%      95%     97%     94%
Within 10% reproducibility     100%     100%    100%    100%
These surveys also offer the opportunity for the calibration factor to be
adjusted if a dose calibrator is found to be operating with an error of >10%.
Radiopharmaceutical        a value
99mTc-DMSA                 0.706
99mTc-DTPA                 0.801
99mTc-MAG3                 0.520
99mTc-HMPAO                0.849
99mTc-MAA                  0.871
99mTc-phosphonates         0.763
99mTc-IDA                  0.840
99mTc-tetrofosmin          0.834
99mTc-red cells            0.859
99mTc-white cells          0.869
99mTc-sestamibi            0.842
18F-FDG                    0.782
67Ga-citrate               0.931
123I or 131I iodide        1.11
http://www.eanm.org/publications/dosage_calculator.php?navId=285
Dosage Card
(Version 1.2.2014)

Multiple of Baseline Activity

Weight (kg)   Class A   Class B   Class C
4             1.12      1.14      1.33
6             1.47      1.71      2.00
8             1.71      2.14      3.00
10            1.94      2.71      3.67
12            2.18      3.14      4.67
14            2.35      3.57      5.67
16            2.53      4.00      6.33
18            2.71      4.43      7.33
20            2.88      4.86      8.33
22            3.06      5.29      9.33
24            3.18      5.71      10.00
26            3.35      6.14      11.00
28            3.47      6.43      12.00
30            3.65      6.86      13.00
32            3.77      7.29      14.00
34            3.88      7.72      15.00
36            4.00      8.00      16.00
38            4.18      8.43      17.00
40            4.29      8.86      18.00
42            4.41      9.14      19.00
44            4.53      9.57      20.00
46            4.65      10.00     21.00
48            4.77      10.29     22.00
50            4.88      10.71     23.00
52-54         5.00      11.29     24.67
56-58         5.24      12.00     26.67
60-62         5.47      12.71     28.67
64-66         5.65      13.43     31.00
68            5.77      14.00     32.33
(b) 123I-mIBG, 3 kg:
This card is based upon the publication by Jacobs F, Thierens H, Piepsz A, Bacher K, Van de Wiele C, Ham H, Dierckx RA. Optimized tracer-dependent dosage cards to obtain weight-independent effective doses. Eur J Nucl Med Mol Imaging. 2005 May; 32(5):581-8.
This card summarizes the views of the Paediatric and Dosimetry Committees of the EANM
and reflects recommendations for which the EANM cannot be held responsible.
The dosage recommendations should be taken in context of good practice of nuclear
medicine and do not substitute for national and international legal or regulatory provisions.
Radiopharmaceutical                                                      Baseline activity (for calculation purposes only) (MBq)   Minimum recommended activity1 (MBq)
123I (Thyroid)                                                           0.6       …
123I Amphetamine (Brain)                                                 13.0      18
123I …                                                                   5.3       10
123I …                                                                   12.8      10
123I mIBG                                                                28.0      37
131I mIBG                                                                5.6       35
18F FDG-PET torso                                                        25.9      26
18F FDG-PET brain                                                        14.0      14
18F Sodium fluoride                                                      10.5      14
67Ga Citrate                                                             5.6       10
99mTc ALBUMIN (Cardiac)                                                  56.0      80
99mTc …                                                                  2.8       10
99mTc COLLOID (Liver/Spleen)                                             5.6       15
99mTc COLLOID (Marrow)                                                   21.0      20
99mTc DMSA                                                               6.8       18.5
99mTc …                                                                  14.0      20
99mTc …                                                                  34.0      20
99mTc …                                                                  32.0      110
99mTc HMPAO (Brain)                                                      51.8      100
99mTc HMPAO (WBC)                                                        35.0      40
99mTc IDA (Biliary)                                                      10.5      20
99mTc MAA / Microspheres                                                 5.6       10
99mTc MAG3                                                               11.9      15
99mTc MDP                                                                35.0      40
99mTc Pertechnetate (Cystography)                                        1.4       20
99mTc …                                                                  10.5      20
99mTc …                                                                  35.0      80
99mTc Pertechnetate (Thyroid)                                            5.6       10
99mTc …                                                                  56.0      80
99mTc SestaMIBI/Tetrofosmin (Cancer seeking agent)                       63.0      80
99mTc SestaMIBI/Tetrofosmin2 (Cardiac rest scan 2-day protocol min)      42.0      80
99mTc SestaMIBI/Tetrofosmin2 (Cardiac rest scan 2-day protocol max)      63.0      80
99mTc SestaMIBI/Tetrofosmin2 (Cardiac stress scan 2-day protocol min)    42.0      80
99mTc SestaMIBI/Tetrofosmin2 (Cardiac stress scan 2-day protocol max)    63.0      80
99mTc SestaMIBI/Tetrofosmin2 (Cardiac rest scan 1-day protocol)          28.0      80
99mTc SestaMIBI/Tetrofosmin2 (Cardiac stress scan 1-day protocol)        84.0      80
99mTc …                                                                  2.8       20
99mTc …                                                                  70.0      100
1 The minimum recommended activities are calculated for commonly used gamma cameras or positron emission tomographs. Lower activities could be administered when using systems with higher counting efficiency.
2 The minimum and maximum values correspond to the recommended administered activities in the EANM/ESC procedural guidelines (Hesse B, Tagil K, Cuocolo A, et al. EANM/ESC procedural guidelines for myocardial perfusion imaging in nuclear cardiology. Eur J Nucl Med Mol Imaging. 2005 Jul;32(7):855-97).
3 This is the activity load needed to prepare the Technegas device. The amount of inhaled activity will be lower.
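The card's dosage rule, baseline activity scaled by the weight dependent multiple and never less than the minimum recommended activity, can be sketched as follows; the baseline, multiple and minimum used in the example are illustrative values, and the max() rule is an interpretation of the card, not a quotation from it.

```python
def administered_activity(baseline_mbq, weight_multiple, minimum_mbq):
    """Paediatric administered activity: baseline x weight multiple,
    clamped to the minimum recommended activity (interpretation of the
    EANM dosage card rule)."""
    return max(baseline_mbq * weight_multiple, minimum_mbq)

# Hypothetical example: baseline 35.0 MBq, weight multiple 3.57 (a 14 kg
# child in a middle class on the card), minimum 40 MBq:
print(administered_activity(35.0, 3.57, 40.0))  # approximately 124.95 MBq

# For a very small child the product falls below the minimum, which then applies:
print(administered_activity(0.6, 1.0, 3.0))
```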
[Table: recommended surface contamination limits (Bq/cm2) for 18F, 32P, 51Cr, 67Ga, 89Sr, 90Y, 99mTc, 111In, 123I, 125I, 131I, 177Lu and 201Tl, by area designation, including non-designated areas and personal clothing; the limits range from 50 to 10 000 Bq/cm2 depending on the radionuclide and the area.]
The ICRP recommends that the ring monitor be worn on the middle finger with
the element positioned on the palm side, and that a factor of three should be
applied to derive an estimate of the dose to the tip. If the dosimeter element is
worn facing towards the back of the hand, a factor of six should be applied.
The dose to the fingers is critically dependent on the dispensing technique
used and the skill of the operator. It is important that staff undertake extensive
training in the dispensing technique with non-radioactive solutions prior to
dispensing radiopharmaceuticals for the first time. This is particularly important
with PET radiopharmaceuticals as the specific dose rate constant is much higher
for positron emitters than for radionuclides used for single photon imaging.
9.7. PRODUCT CONTAINMENT ENCLOSURES
9.7.1. Fume cupboards
A fume cupboard is an enclosed workplace designed to prevent the spread
of fumes to the operator and other persons. The fumes can be in the form of
gases, vapours, aerosols or particulate matter. The fume cupboard is designed
to provide operator protection rather than protection for the product within the
cabinet. A fume cupboard would, therefore, not be suitable as an area for cell
labelling procedures as this requires that the blood remain sterile at all times.
Fume cupboards usually include a transparent safety screen which can be
adjusted either vertically (more commonly) or horizontally to vary the size of the
working aperture into the cabinet. Some cupboards are available with a lead glass
safety screen to minimize the need for additional radiation shielding. The most
common type of fume cupboard is known as a variable exhaust air volume fume
cupboard which maintains a constant velocity of air into the cabinet (the face
velocity) irrespective of the sash position. Figure 9.7 shows a fume cupboard
suitable for use with radioactive materials.
Fume cupboards are available which discharge the exhaust air directly,
or after carbon filtration, to the atmosphere, usually above the building. Other
cabinets, known as recirculating fume cabinets, rely on filtration or absorption
to remove airborne contaminants released in the cabinet, so that the air may be
safely discharged back into the laboratory. Recirculating fume cabinets are not
normally applicable for use with radioactive materials.
Any installed fume cupboards must meet the requirements of the local
appropriate standard and any air discharged to the atmosphere must meet the
requirements of the appropriate regulatory authority. The standard will usually
specify the minimum face velocity through the working aperture (e.g. 0.5 m/s).
This should be checked on a regular basis and should be measured with the aperture fully open and in its minimum position. At the minimum position, the
face velocity may need to be higher to retain a constant exhaust rate from the
cabinet.
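The relationship between exhaust flow, aperture area and face velocity described above can be sketched as follows; the 0.5 m/s face velocity is the example figure from the text, while the 1.2 m aperture width and sash heights are hypothetical.

```python
def face_velocity(exhaust_m3_per_s, aperture_width_m, sash_height_m):
    """Face velocity through the working aperture for a given exhaust rate."""
    return exhaust_m3_per_s / (aperture_width_m * sash_height_m)

def required_exhaust(face_v_m_per_s, aperture_width_m, sash_height_m):
    """Exhaust rate a variable exhaust air volume cupboard must deliver to
    hold a constant face velocity at a given sash position."""
    return face_v_m_per_s * aperture_width_m * sash_height_m

# Half-open sash (0.5 m) versus fully open (1.0 m) at 0.5 m/s:
print(required_exhaust(0.5, 1.2, 0.5), required_exhaust(0.5, 1.2, 1.0))
```

This also shows why, at a fixed exhaust rate, the face velocity rises as the sash is lowered, as noted in the text.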
Before initial use, and as part of a regular quality control schedule, a smoke
test should be performed. This is to provide visual evidence of fume containment
within, or escape from, the fume cupboard. Smoke is released in and around the
fume cupboard and the visual pattern of airflow is observed. The results of the
smoke test must be documented and any reverse flows from the confines of the
cupboard corrected before subsequent use.
and the arms of the operator. During use, the filtered air may escape from the
front of the cabinet, when the airflow is disturbed, so operator protection cannot
be ensured.
9.7.3. Isolator cabinets
Isolator cabinets provide both operator and product protection. Figure 9.8
shows an example of a blood cell labelling isolator. The product is manipulated
through glove ports so that the interior of the cabinet is maintained totally sterile
and full operator protection is provided. Airflow within the isolator is deliberately
designed to be turbulent so that there are no dead spaces within the cabinet. The
unit illustrated incorporates a centrifuge which can be controlled externally. A
dose calibrator can be included within the isolator, so that the cell suspension
does not need to be removed from the isolator for the activity to be measured.
The isolator incorporates timed interlocks on the vacuum door seals to ensure
that the product remains sterile.
Radionuclide    Maximum beta energy (MeV)    Maximum range in water (mm)
14C             0.156                        0.30
32P             1.709                        8.2
89Sr            1.463                        6.8
90Y             2.274                        11
The highest surface dose rates encountered in the radiopharmacy are likely
to be from 99Mo/99mTc generators which may contain >100 GBq of 99Mo. The
primary emission from 99Mo has an energy of 740 keV, so requires several
centimetres of lead shielding to reduce the dose rates to an acceptable level. The
generator, as supplied, will already contain substantial shielding but additional
shielding will usually be required. This may be available from the generator
FIG. 9.12. A tungsten syringe shield for γ emitting radionuclides and a Perspex syringe shield for pure β emitters.
[Table: broad beam transmission factors for 99Mo, 99mTc, 67Ga, 131I, 201Tl(a) and 511 keV photons as a function of shielding thickness, from 0 to 50 mm; the individual tabulated values are not reliably recoverable here.]
a The transmission data for 201Tl includes a contribution of 1.5% of the contaminant 200Tl, the maximum level likely to be encountered in clinical practice.
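Tabulated transmission factors such as these are commonly interpolated log-linearly between thicknesses, since attenuation is approximately exponential. The 99mTc data points below are illustrative assumptions for the sketch, not a substitute for the published tables.

```python
import math

# Illustrative broad beam transmission points for 99mTc through lead
# (thickness in mm -> transmission); assumed values for demonstration only.
TC99M_PB = {0.0: 1.00, 1.0: 0.105, 2.0: 0.00835}

def transmission(thickness_mm, table):
    """Log-linear interpolation between tabulated transmission factors."""
    xs = sorted(table)
    if thickness_mm <= xs[0]:
        return table[xs[0]]
    for lo, hi in zip(xs, xs[1:]):
        if thickness_mm <= hi:
            f = (thickness_mm - lo) / (hi - lo)
            return math.exp((1 - f) * math.log(table[lo]) + f * math.log(table[hi]))
    return table[xs[-1]]

# Transmission at an intermediate thickness of 1.5 mm:
print(round(transmission(1.5, TC99M_PB), 4))
```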
Thickness (mm)   99Mo     99mTc     67Ga      131I     201Tl(a)   511 keV
0                1.00     1.00      1.00      1.00     1.00       1.00
10               0.845    0.779     0.884     0.916    0.759      0.958
20               0.718    0.607     0.769     0.825    0.581      0.909
30               0.614    0.473     0.661     0.735    0.449      0.852
40               0.527    0.368     0.564     0.649    0.349      0.789
50               0.454    0.287     0.477     0.570    0.274      0.722
60               0.393    0.224     0.402     0.498    0.217      0.653
70               0.341    0.174     0.338     0.434    0.173      0.584
80               0.296    0.136     0.282     0.377    0.139      0.518
90               0.258    0.106     0.236     0.327    0.112      0.456
100              0.225    0.0824    0.196     0.284    0.0912     0.399
120              0.172    0.0500    0.135     0.212    0.0612     0.301
140              0.132    0.0304    0.0928    0.158    0.0418     0.224
160              0.101    0.0184    0.0635    0.118    0.0290     0.166
180              0.0777   0.0112    0.0434    0.0879   0.0203     0.123
200              0.0598   0.00679   0.0296    0.0654   0.0143     0.0904
250              0.0312   0.00195   0.0113    0.0312   0.00607    0.0419
[Table continuation for thicknesses of 300, 400 and 500 mm: transmission factors fall to between roughly 10⁻² and 10⁻⁶; the individual entries are not reliably recoverable.]
a The transmission data for 201Tl includes a contribution of 1.5% of the contaminant 200Tl, the maximum level likely to be encountered in clinical practice.
short lived radionuclides. Each package of waste (bag, sharps container, wheeled
bin) must be marked with the:
Radionuclide, if known;
Maximum dose rate at the surface of the container or at a fixed distance
(e.g. 1 m);
Date of storage.
The above information should be recorded, together with information
identifying the location of the container within the store, and the likely release
date (e.g. ten half-lives of the longest lived radionuclide in the container).
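The likely release date recorded for a waste container can be computed from the storage date and the longest half-life; a sketch, in which the 131I half-life of 8.02 d used in the example is an assumption, not a figure from the text:

```python
import datetime

def release_date(storage_date, longest_half_life_days, n_half_lives=10):
    """Likely release date after n half-lives (ten by default, per the
    text) of the longest lived radionuclide in the container.
    Note: date + timedelta ignores fractional days."""
    return storage_date + datetime.timedelta(days=n_half_lives * longest_half_life_days)

# Hypothetical container of 131I waste stored on 1 January 2024:
print(release_date(datetime.date(2024, 1, 1), 8.02))
```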
When the package is finally released for disposal, the record should be
updated to record the dose rate at that time, which should be at background levels,
the date of disposal, and the identification of the person authorizing its disposal.
Old sealed sources previously used for quality control or transmission
scans, such as 137Cs, 57Co, 153Gd and 68Ge, should be kept in a secure store until
the activity has decayed to a level permitted for disposal, or the source can be
disposed of by a method approved by the regulatory authority.
REFERENCES
[9.1] TYLER, D.K., WOODS, M.J., Syringe calibration factors for the NPL Secondary Standard Radionuclide Calibrator for selected medical radionuclides, Appl. Radiat. Isot. 59 (2003) 367–372.
[9.2] INTERNATIONAL ELECTROTECHNICAL COMMISSION, Calibration and Usage
of Ionization Chamber Systems for Assay of Radionuclides, IEC 61145:1992, IEC
(1992).
[9.3] INTERNATIONAL ELECTROTECHNICAL COMMISSION, Medical Electrical Equipment – Radionuclide Calibrators – Particular Methods for Describing Performance, IEC 61303:1994, IEC (1994).
[9.4] INTERNATIONAL ELECTROTECHNICAL COMMISSION, Nuclear Medicine Instrumentation – Routine Tests – Part 4: Radionuclide Calibrators, IEC/TR 61948-4:2006, IEC (2006).
[9.5] AMERICAN NATIONAL STANDARDS INSTITUTE, Calibration and Usage of Dose Calibrator Ionization Chambers for the Assay of Radionuclides, ANSI N42.13-2004, ANSI (2004).
[9.6] INTERNATIONAL COMMISSION ON RADIOLOGICAL PROTECTION,
Radiation Dose to Patients from Radiopharmaceuticals, Publication 53, Pergamon
Press, Oxford and New York (1988).
BIBLIOGRAPHY
GADD, R., et al., Protocol for Establishing and Maintaining the Calibration of Medical
Radionuclide Calibrators and their Quality Control, Measurement Good Practice Guide No. 93,
National Physical Laboratory, UK (2006).
GROTH, M.J., Empirical dose rate and attenuation data for radionuclides in nuclear medicine,
Australas. Phys. Eng. Sci. Med. 19 (1996) 160–167.
NATIONAL HEALTH AND MEDICAL RESEARCH COUNCIL (Australia), Recommended
Limits on Radioactive Contamination on Surfaces in Laboratories, Radiation Health Series
No. 38, NHMRC (1995).
SCHRADER, H., Activity Measurements with Ionization Chambers, Monographie Bureau
International des Poids et Mesures No. 4 (1997).
CHAPTER 10
NON-IMAGING DETECTORS AND COUNTERS
P.B. ZANZONICO
Department of Medical Physics,
Memorial Sloan Kettering Cancer Center,
New York, United States of America
10.1. INTRODUCTION
Historically, nuclear medicine has been largely an imaging based
specialty, employing such diverse and increasingly sophisticated modalities as
rectilinear scanning, (planar) gamma camera imaging, single photon emission
computed tomography (SPECT) and positron emission tomography (PET).
Non-imaging radiation detection, however, remains an essential component of
nuclear medicine. This chapter reviews the operating principles, performance,
applications and quality control (QC) of the various non-imaging radiation
detection and measurement devices used in nuclear medicine, including survey
meters, dose calibrators, well counters, intra-operative probes and organ uptake
probes. Related topics, including the basics of radiation detection, statistics of
nuclear counting, electronics, generic instrumentation performance parameters
and nuclear medicine imaging devices, are reviewed in depth in other chapters
of this book.
10.2. OPERATING PRINCIPLES OF RADIATION DETECTORS
Radiation detectors encountered in nuclear medicine may generally
be characterized as either scintillation or ionization detectors (Fig. 10.1). In
scintillation detectors, visible light is produced as radiation excites atoms of a crystal or other scintillator and is converted to an electronic signal, or pulse, and amplified by a photomultiplier tube (PMT) and its high voltage (500–1500 V). In ionization detectors, free electrons produced as radiation ionizes a stopping material within a sensitive volume are electrostatically collected by a bias voltage (10–500 V) to produce an electron signal. In both scintillation and ionization detectors, the unprocessed signal is then shaped and amplified. For some types of detector, the resulting pulses are sorted by their amplitude (or pulse height), which is related to the X ray or γ ray energy absorbed in the detector.
FIG. 10.1. Basic design and operating principles of (a) scintillation and (b) ionization detectors.
FIG. 10.2. The signal (expressed as the amplification factor, that is, the total number of electrons per primary electron produced in the detector material) as a function of the bias voltage for gas filled ionization detectors. The principal difference among such detectors is the magnitude of the bias voltage between the anode and cathode. The amplification factors and the voltages shown are approximate.

At a bias voltage of 300 V, all of the primary electrons (i.e. the electrons produced directly by ionization of the detector material by the incident radiation) are collected at the anode and the detector signal is, thereby, maximized. Since there are no additional primary electrons to collect, increasing the bias voltage
further (up to 600 V) does not increase the signal. The 300–600 V range, where the overall signal is equivalent to the number of primary electrons and, therefore, proportional to the energy of the incident radiation, is called the ionization chamber region. At a bias voltage of 600–900 V, however, the large
electrostatic force of attraction of the anode accelerates free electrons, as they
travel towards the anode, to sufficiently high speeds to eject additional orbital
electrons (i.e. secondary electrons) within the sensitive volume, contributing to
an increasing overall signal: the higher the voltage, the more energetic the electrons and the more secondary electrons are added to the overall signal. The
number of electrons comprising the overall signal is, thus, proportional to the
primary number of electrons and the energy of the incident radiation, and the
600–900 V range is, therefore, called the proportional counter region. As the bias voltage is increased further, beyond 900 V (up to 1200 V), free electrons (primary
and secondary) are accelerated to very high speeds and strike the anode with
sufficient energy to eject additional electrons from the anode surface itself. These
tertiary electrons are, in turn, accelerated back to the anode surface and eject even
more electrons, effectively forming an electron cloud over the anode surface and
yielding a constant overall signal even with further increase in the bias voltage.
The 900–1200 V range is called the Geiger counter (or Geiger–Müller) region.
Importantly, the magnitude of the charge represented by this electron cloud is
independent of the number of electrons initiating its formation. Therefore,
in contrast to ionization chamber and proportional counter signals, the Geiger
counter signal is independent of the energy of the incident radiation. Finally,
beyond a bias voltage of 1200 V, atoms within the detector material are ionized
even in the absence of ionizing radiation (i.e. undergo spontaneous ionization),
producing an artefactual signal; the voltage range beyond 1200 V is known as the
spontaneous discharge region.
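The voltage bands described above can be summarized in a minimal sketch; the boundaries are approximate, as the text notes.

```python
# Approximate operating bands for gas filled detectors, per the text:
REGIONS = [
    (0, 300, "recombination"),
    (300, 600, "ionization chamber"),
    (600, 900, "proportional counter"),
    (900, 1200, "Geiger counter"),
]

def region(bias_v: float) -> str:
    """Name of the operating region for a given bias voltage."""
    for lo, hi, name in REGIONS:
        if lo <= bias_v < hi:
            return name
    return "spontaneous discharge"

print(region(450), region(750), region(1000), region(1300))
```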
Although the bias voltage is the principal difference among different types
of gas filled ionization detectors, there may be other differences. The sensitive
volume, for example, may or may not be sealed. Unsealed sensitive volumes
contain only air at atmospheric (ambient) pressure. For detectors with unsealed
volumes, the signal must be corrected by calculation for the difference between
the temperature and pressure at which the detector was calibrated (usually
standard temperature and pressure: 27°C and 760 mm Hg, respectively) and
the ambient conditions at the time of an actual measurement. For detectors with
sealed volumes, gases other than air (e.g. argon) may be used and the gas may be
pressurized, providing higher stopping power, and, therefore, higher sensitivity,
than detectors having a non-pressurized gas in the sensitive volume. In addition,
different geometric arrangements of the anode and cathode, such as parallel
plates (used in some ionization chambers), a wire along the axis of a cylinder
(used in Geiger counters), etc., may be used.
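For an unsealed (vented) chamber, the usual air density correction multiplies the reading by a temperature and pressure ratio; a sketch using the calibration conditions quoted in the text (27°C and 760 mm Hg), with the measurement conditions in the example being hypothetical:

```python
def air_density_correction(temp_c, pressure_mmhg,
                           cal_temp_c=27.0, cal_pressure_mmhg=760.0):
    """Factor by which to multiply the reading of a vented ionization
    chamber to correct the air density back to calibration conditions."""
    return ((273.15 + temp_c) / (273.15 + cal_temp_c)) * (cal_pressure_mmhg / pressure_mmhg)

# Cooler, lower pressure room: denser air would over-read without the
# pressure term; the combined factor here is close to unity.
print(round(air_density_correction(20.0, 740.0), 4))
```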
The functional properties and, therefore, the applications of the various
types of ionization detector ionization chambers, proportional counters and
Geiger counters are largely dictated by their respective bias voltage dependent
signal (Table 10.1). Ionization chambers are widely used in radiation therapy to
calibrate the output of therapy units and in nuclear medicine as dose calibrators
(i.e. devices used to assay radiopharmaceutical activities). The relatively
low sensitivity of ionization chambers is not a major disadvantage for such
applications, as the radiation intensities encountered are typically rather large.
The stability of the response is an important advantage, however, as it allows
the use of unconditioned AC electrical power (i.e. as provided by ordinary wall
outlets). Proportional counters, because of their need for a stable bias voltage
and, therefore, specialized power supplies, are restricted to research applications
(e.g. in radiobiology) where both higher sensitivity and the capability of energy
discrimination may be advantageous. Proportional counters often employ an
unsealed, gas flow-through sensitive volume. Geiger counters, because of their
high sensitivity and stability with respect to voltage (allowing the use of a portable
power supply such as an ordinary battery), are widely used as survey meters to
measure ambient radiation levels and to detect radioactive contamination. For
such applications, sensitivity, and not energy discrimination, is critical. As with
dose calibrators, Geiger counters have sealed sensitive volumes, avoiding the
need for temperaturepressure corrections.
In addition to the more familiar gas filled ionization detectors, solid state
ionization detectors are now available. Such detectors are based on a family of
materials known as semiconductors. The pertinent difference among (crystalline)
solids conductors, insulators and semiconductors is related to the widths of
their respective electron forbidden energy gaps. In a semiconductor, the highest
energy levels occupied by electrons are completely filled but the forbidden
gap is narrow enough (<2 eV) to allow radiative or even thermal excitation at
room temperature, thereby allowing a small number of electrons to cross the
gap and occupy energy levels among the otherwise empty upper energy levels.
Such electrons are mobile and, thus, can be collected by a bias voltage, with the
amplitude of the resulting signal being equivalent to the number of electrons
produced by the radiation and, therefore, proportional to the radiation energy.
Although many semiconductor materials have suitably large energy gaps (~2 eV),
techniques must be available to produce crystals relatively free of structural
defects. Defects (i.e. irregularities in the crystal lattice) can trap electrons
produced by radiation and, thus, reduce the total charge collected, degrading
the sensitivity and overall detector performance of semiconductors. Practical,
reasonably economical crystal growing techniques have been developed
for cadmium telluride (CdTe), cadmium zinc telluride (CZT) and mercuric
iodide (HgI2), and these detectors have been incorporated into commercial
intra-operative gamma probes and, on a limited basis, small field of view gamma
cameras.
TABLE 10.1. PROPERTIES OF GAS FILLED IONIZATION DETECTORS

                                         Ionization chamber   Proportional counter   Geiger counter
Bias voltage                             300–600 V            600–900 V              900–1200 V
Stable with respect to bias voltage?a    Yes                  No                     Yes
Sensitivityb                             Low                  Intermediate           High
Capable of energy discrimination?c       Yes                  Yes                    No
Applications                             Dose calibrator      Research               Survey meter

a The stability with respect to the bias voltage corresponds to a constant signal over the respective detector's operating voltage range. In contrast to ionization chambers and Geiger counters, proportional counters are unstable with respect to the bias voltage and, thus, require specialized, highly stable voltage sources for constancy of response.
b The sensitivity of a detector is related to its amplification factor (see Fig. 10.2).
c If the total number of electrons comprising the signal is proportional to the number of electrons directly produced by the incident radiation and, therefore, proportional to its energy, as in ionization chambers and proportional counters, radiations of different energies can be discriminated (i.e. separated) on the basis of the signal amplitude.
the even larger positive voltage, ~400 V, on the second dynode. The impact of
these electrons on the second dynode surface ejects an additional three electrons,
on average, for each incident electron. Typically, a PMT has 10–12 such dynodes (or stages), each ~100 V more positive than the preceding dynode, resulting in an overall electron amplification factor of 3¹⁰–3¹² for the entire PMT. At the last stage, the anode, an output signal is generated. The irregularly shaped PMT output signal
is then shaped by a preamplifier and further amplified into a logic pulse that
can be further processed electronically. The resulting electrical pulses, whose
amplitudes (or heights) are proportional to the number of electrons produced
at the PMT photocathode are, therefore, also proportional to the energy of the
incident radiation. These pulses can then be sorted according to their respective
heights by an energy discriminator (also known as a pulse height analyser) and
those pulses with a pulse height (i.e. energy) within the preset photopeak energy
window (as indicated by the pair of dashed horizontal lines overlying the pulses
in Fig. 10.3) are counted by a timer/scaler.
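The overall gain follows directly from the per-dynode multiplication described above; a one-line sketch using the factor of three quoted in the text:

```python
def pmt_gain(dynodes: int = 10, electrons_per_stage: float = 3.0) -> float:
    """Overall PMT electron amplification: each dynode ejects, on average,
    a few electrons (three, per the text) for each incident electron."""
    return electrons_per_stage ** dynodes

# Ten and twelve dynode tubes:
print(pmt_gain(10), pmt_gain(12))  # 59049.0 531441.0
```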
Advantageous features of scintillation detectors include:
High electron density (determined by mass density and effective atomic
number Zeff);
High light output;
For certain applications such as PET, speed of light emission.
High mass density and effective atomic number maximize the crystal
stopping power (i.e. linear attenuation coefficient μ) and, therefore, sensitivity.
In addition, a higher atomic number crystal will have a higher proportion of
photoelectric than Compton interactions, thus facilitating energy discrimination
of photons which underwent scatter before entering the crystal. High light output
reduces statistical uncertainty (noise) in the scintillation and associated electronic
signal and, thus, improves energy resolution and scatter rejection. Other detector
considerations include:
Transparency of the crystal to its own scintillations (i.e. minimal
self-absorption);
Matching of the index of refraction of the crystal to that of the
photodetector (specifically, the entrance window (n ≈ 1.5) of the PMT);
Matching of the scintillation wavelength to the light response of the
photodetector (the PMT photocathode, with maximum sensitivity in the
390–410 nm, or blue, wavelength range);
Minimal hygroscopic behaviour.
CHAPTER 10
Sensitivity (or efficiency) is the detected count rate per unit activity (e.g. in
counts per minute per megabecquerel). As the count rate detected from a given
activity is highly dependent on the source-detector geometry and intervening
media, characterization of sensitivity can be ambiguous. There are two distinct
components of overall sensitivity, geometric sensitivity and intrinsic sensitivity.
Geometric sensitivity is the fraction of emitted radiations which intersect, or strike,
the detector, that is, the fraction of the total solid angle subtended at the detector
by the source. It is, therefore, directly proportional to the radiation-sensitive
detector area and, for a point source, inversely proportional to the square of the
source-detector distance. Intrinsic sensitivity is the fraction of radiation striking
the detector which is stopped within the detector. Intrinsic sensitivity is directly
related to the detector thickness, effective atomic number and mass density, and
decreases with increasing photon energy, since higher energy photons are more
penetrating and are more likely to pass through a detector without interacting.
Characteristic X rays and γ rays are emitted from radioactively decaying
atoms with well defined discrete energies. Even in the absence of scatter, however,
output pulses from absorption of these radiations will appear to originate over a
range of energies, reflecting the relatively coarse energy resolution of the detector.
For this reason, many radiation detectors employ some sort of energy-selective
counting using an energy range, or window, such that radiations are only counted
if their detected energies lie within that range (Figs. 10.3 and 10.4(a)). At least
for scintillation detectors, a so-called 20% photopeak energy window, Eγ ± 10%
of Eγ (e.g. 126–154 keV for the 140 keV γ ray of 99mTc), is employed, where Eγ
is the photopeak energy of the X ray or γ ray being counted. For such energy-selective counting, overall sensitivity appears to increase as the photopeak energy
window is widened. However, this results in acceptance of more scattered as well
as primary (i.e. unscattered) radiations.
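The window arithmetic above can be sketched in a few lines (an illustrative helper; the function name is ours):

```python
def photopeak_window(e_photopeak_kev, window_percent=20.0):
    """Return the (lower, upper) bounds of a symmetric photopeak energy window.

    A '20% window' spans E +/- 10% of E, i.e. a total width of 20% of E.
    """
    half_fraction = (window_percent / 100.0) / 2.0
    return (e_photopeak_kev * (1.0 - half_fraction),
            e_photopeak_kev * (1.0 + half_fraction))

# 20% window about the 140 keV photopeak of 99mTc: approximately 126-154 keV
lower_kev, upper_kev = photopeak_window(140.0)
```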
For each radionuclide and energy window (if applicable) for which a
particular detector is used, the detector should be calibrated, that is, its sensitivity
(e.g. in cpm/MBq) S determined, at installation and periodically thereafter:
S = (Rg − Rb)/(A0 e^(−λΔt))  (10.1)
where
Rg is the gross (i.e. total) count rate (cpm) of the radionuclide source (RS);
Rb is the background (BG), or blank, count rate (cpm);
A0 is the activity (MBq) of the radionuclide source at calibration;
λ is the physical decay constant (in month⁻¹ or a⁻¹, depending on the half-life)
of the calibration radionuclide;
and Δt is the time interval (in months or years, respectively, again depending
on the half-life) between the calibration of the radionuclide and the current
measurement.
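As a minimal sketch of this calibration calculation (Eq. (10.1)), with illustrative numbers and function and parameter names of our own choosing:

```python
import math

def sensitivity_cpm_per_mbq(gross_cpm, background_cpm, a0_mbq,
                            decay_constant_per_day, days_since_calibration):
    """Sensitivity S = (Rg - Rb) / (A0 * exp(-lambda * dt)), as in Eq. (10.1).

    The exponential factor decay-corrects the calibrated activity A0 to the
    activity at the time of the measurement.
    """
    net_cpm = gross_cpm - background_cpm
    current_activity_mbq = a0_mbq * math.exp(
        -decay_constant_per_day * days_since_calibration)
    return net_cpm / current_activity_mbq

# Illustrative numbers: a 57Co rod source (~37 kBq = 0.037 MBq at calibration,
# lambda ~ 0.00254/d) counted 90 days after calibration.
s = sensitivity_cpm_per_mbq(gross_cpm=25_000.0, background_cpm=200.0,
                            a0_mbq=0.037, decay_constant_per_day=0.00254,
                            days_since_calibration=90.0)
```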
As noted, sensitivity is highly dependent on the source-detector counting
geometry (including the size and shape of the source and the source-detector
distance), and the measured value, thus, applies exactly only for the geometry
used for the measurement.
10.3.2. Energy resolution
Energy resolution quantifies the ability of a detector to separate, or
discriminate, radiations of different energies. As illustrated in Fig. 10.4(b),
energy resolution is generally given by the width of the bell shaped photopeak,
specified as the full width at half maximum (FWHM = ΔE) expressed as a
percentage of the photopeak energy Eγ: FWHM (%) = (ΔE/Eγ) × 100%. It is related
FIG.10.3. The basic design and operating principle of photomultiplier tubes and scintillation
detectors.
[Spectra: (a) 137Cs, 662 keV photopeak with ΔE = 46 keV, FWHM (%) = (46/662) × 100% ≈ 7%; (b) 99mTc, 140 keV photopeak with a 20% photopeak energy window, showing primary, scattered and total photon contributions.]
FIG.10.4. (a) Energy spectrum for the 662 keV γ rays emitted by 137Cs, illustrating the
definition of energy resolution as the percentage full width at half maximum (FWHM) of the
photopeak energy Eγ. (b) Energy spectrum for the 140 keV γ rays emitted by 99mTc, illustrating
the contributions of primary (unscattered) and scattered radiation counts. In (a) and (b), the
energy spectra were obtained with a thallium-doped sodium iodide (NaI(Tl)) scintillation
detector.
detector, however, even radiation which is not counted (i.e. which interacts
with the detector during the dead time of a previous event) prevents counting
of subsequent incoming radiations during the time interval corresponding to
its dead time. Geiger counters (with quenching gas) behave as non-paralysable
systems but most detectors, including scintillation detector based systems,
such as well counters, cameras and PET scanners, are paralysable. Modern
scintillation detectors generally incorporate automated algorithms to yield count
rates corrected for dead time count losses.
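The two dead time behaviours described above follow the standard count rate models (a sketch under the usual textbook forms, m = n·e^(−nτ) for paralysable and m = n/(1 + nτ) for non-paralysable systems; the numbers are illustrative):

```python
import math

def observed_rate_paralysable(true_rate_cps, dead_time_s):
    """Paralysable model: m = n * exp(-n * tau). Every incoming event,
    counted or not, extends the dead period, so the observed rate peaks
    and then falls as the true rate keeps rising."""
    return true_rate_cps * math.exp(-true_rate_cps * dead_time_s)

def observed_rate_nonparalysable(true_rate_cps, dead_time_s):
    """Non-paralysable model: m = n / (1 + n * tau). Events arriving during
    the dead period are simply lost; the observed rate asymptotically
    approaches the ceiling 1/tau."""
    return true_rate_cps / (1.0 + true_rate_cps * dead_time_s)

tau = 5e-6  # illustrative 5 microsecond dead time
# The paralysable response is maximal at n = 1/tau, where m = 1/(tau * e)
peak_observed = observed_rate_paralysable(1.0 / tau, tau)
```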
FIG.10.5. The observed versus true count rates for paralysable and non-paralysable radiation
detectors. For paralysable detectors, the observed count rate increases to a maximum value
with increasing true count rate (e.g. with increasing activity) and then decreases as the true
count rate is further increased. For non-paralysable detectors, the observed count rate also
increases with increasing true count rate, asymptotically approaching a maximum value as the
true count rate is further increased. In both cases, the maximum observed count rate is directly
related to the detector's dead time τ.
(i.e. are operated at a relatively low potential difference between the anode
and cathode) and are designed for use where relatively high fluxes of X rays
and γ rays are encountered. The more familiar Geiger counters are operated at
a high potential difference (Fig. 10.2), providing a high electron amplification
factor and, thus, high sensitivity. Geiger counters are, therefore, well suited for
low level surveys, for example, checking for radioactive contamination. Both
cutie-pies and Geiger counters are generally calibrated in terms of exposure rate.
As an ionization chamber, the cutie-pie's electron signal depends on the energy
of the detected X rays or γ rays and is, therefore, directly related to the exposure
for all radionuclides. For Geiger counters, on the other hand, signal pulses have
the same amplitude regardless of the energy of the incoming radiation. Thus,
Geiger counter calibration results apply only to the particular radionuclide(s)
used to calibrate the counter (see below). Solid state detectors employ a
non-air-equivalent crystal as the detection medium and, thus, cannot measure
exposure rates, only count rates.
10.4.2. Dose calibrator
The dose calibrator, used for assaying activities in radiopharmaceutical
vials and syringes and in other small sources (e.g. brachytherapy sources),
is a pressurized gas filled ionization chamber with a sealed sensitive volume
configured in a well-type geometry. While the intrinsic sensitivity of the dose
calibrator, as that of other gas filled detectors, is relatively low, the well-type
configuration of its sensitive volume provides high geometric efficiency1, making
the overall sensitivity entirely adequate for the relatively high radiopharmaceutical
activities (of the order of 10100 MBq) typically encountered in clinical nuclear
medicine. Dose calibrators are equipped with isotope specific push-buttons
and/or a potentiometer (with isotope-specific settings provided) to adjust for
differences in energy dependent response and to thereby yield accurate readouts
of activity (i.e. kBq or MBq) for any radioisotope.
10.4.3. Well counter
Well counters are used for high sensitivity counting of radioactive
specimens such as blood or urine samples or wipes from surveys of removable
contamination (i.e. wipe testing). Such counting results can be expressed in
1 The solid angle subtended at the centre of a sphere by the total surface of the sphere
is 4π steradians; a steradian is the unit of solid angle. A well-type detector configuration
approximates a point source completely surrounded by a detector, yielding a per cent geometric
efficiency of 100%, and is, therefore, referred to as a 4π counting geometry.
terms of activity (e.g. MBq) using the measured isotope specific calibration
factor (cpm/MBq) (see Eq. (10.1)). Such devices are generally comprised of a
cylindrical scintillation crystal (most commonly, NaI(Tl)) with a circular bore
(well) for the sample drilled part-way into the crystal and backed by a PMT and
its associated electronics. An alternative design for well counters is the so-called
through-hole detection system in which the hole is drilled through the entire
crystal. The through-hole design facilitates sample exchange, and because
samples are centred lengthwise in the detector, yields a more constant response
for different sample volumes as well as slightly higher sensitivity than the well
counters. In both the well and through-hole designs, the crystal is surrounded by
thick lead shielding to minimize the background due to ambient radiation.
Scintillation counters are often equipped with a multichannel analyser
for energy (i.e. isotope) selective counting and an automatic sample changer
for automated counting of multiple samples. Importantly, because of their high
intrinsic and geometric efficiencies (resulting from the use of a thick crystal and
a well-type detector configuration, respectively), well counters are extremely
sensitive and, in fact, can reliably be used only for counting activities up to
~100 kBq; at higher activities, and even with dead time corrections applied,
dead time counting losses may still become prohibitive and the measured counts
inaccurate. Modern well counters often include an integrated computer which is
used to create and manage counting protocols (i.e. to specify the isotope, energy
window, counting interval, etc.), manage sample handling, and apply background,
decay, dead time and other corrections, and, thus, yield dead time-corrected net
count rate decay corrected to the start of the current counting session.
10.4.4. Intra-operative probes
Intra-operative probes (Fig. 10.6), small hand-held counting devices,
are now widely used in the management of cancer, most commonly to more
expeditiously identify and localize sentinel lymph nodes and, thereby, reduce
the need for more extensive surgery as well as to identify and localize visually
occult disease at surgery following systemic administration of a radiolabelled
antibody or other tumour-avid radiotracer. Although intra-operative probes have
been used almost exclusively for counting X rays and γ rays, beta (electron and
positron) probes constructed with plastic scintillators have also been developed.
In addition, small (~10 cm) field of view intra-operative cameras have recently
become available. Intra-operative probes are available with either scintillation
or semiconductor (ionization) detectors. Scintillation detector based probes have
the advantages of relatively low cost and high sensitivity (mainly because of their
greater thickness, ~10 mm versus only ~1 mm in ionization detectors), especially
for medium to high energy photons.
FIG.10.6. A typical intra-operative probe (Node Seeker 900, Intra Medical Imaging LLC,
Los Angeles, CA, United States of America). (a) Hand-held detector. (b) Control and display
unit which not only displays the current count rate but also often emits an audible signal, the
tone of which is related to the count rate, somewhat analogous to the audible signal produced
by some Geiger counters. (c) A diagram of the detector and collimator assembly of a typical
intra-operative probe, illustrating that the detector (crystal) is recessed from the collimator
aperture. (Courtesy of Intra Medical Imaging LLC, Los Angeles, CA, USA.)
date, the few clinical studies directly comparing scintillation and semiconductor
intra-operative probes have not provided a clear choice between the two types of
probe.
10.4.5. Organ uptake probe
Historically, organ uptake probes have been used almost exclusively for
measuring thyroid uptakes and are, thus, generally known as thyroid uptake
probes.2 Thyroid uptake (i.e. the decay-corrected per cent of administered activity
in the thyroid) may be measured following oral administration of 131I-iodide,
123I-iodide or 99mTc-pertechnetate. The uptake probe is a radionuclide counting
system consisting of a wide-aperture, diverging collimator, a NaI(Tl) crystal
(typically ~5 cm thick by ~5 cm in diameter), a PMT, a preamplifier, an amplifier,
an energy discriminator (i.e. an energy window) and a gantry (stand) (Figs 10.7(a)
and (b)). Commercially available thyroid uptake probes are generally supplied as
integrated, computerized systems with automated data acquisition and processing
capabilities, yielding results directly in terms of per cent uptake.
Each determination of the thyroid uptake includes measurement of the
thyroid (i.e. neck) count rate, the thigh background count rate (measured
over the patient's thigh and presumed to approximate the count contribution of
extra-thyroidal neck activity), the standard count rate (often counted in a neck
phantom simulating the thyroid/neck anatomy) and the ambient (i.e. room)
background, with a 1–5 min counting interval for each measurement. Based
on the foregoing measurements, and knowing the fraction of the administered
activity which is in the standard, the thyroid uptake is calculated as follows:
uptake (%) = ((Cneck/tneck − Cthigh/tthigh)/(Cstd/tstd − Croom/troom)) × F × 100%  (10.2)

where

C and t are the counts and counting interval, respectively, for each of the neck, thigh, standard and room (background) measurements;

and F is the fraction of the administered activity contained in the standard.
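The uptake calculation from the four measurements described above can be sketched as follows (a minimal illustration assuming the usual net-rate ratio form; the function and parameter names are ours):

```python
def thyroid_uptake_percent(neck_counts, t_neck, thigh_counts, t_thigh,
                           std_counts, t_std, room_counts, t_room,
                           standard_fraction):
    """Per cent thyroid uptake from the four measurements described above.

    Each measurement is a raw count C over a counting interval t; the thigh
    rate approximates extra-thyroidal neck background, the room rate the
    ambient background, and standard_fraction (F) is the fraction of the
    administered activity contained in the standard.
    """
    net_thyroid_rate = neck_counts / t_neck - thigh_counts / t_thigh
    net_standard_rate = std_counts / t_std - room_counts / t_room
    return (net_thyroid_rate / net_standard_rate) * standard_fraction * 100.0
```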
2 At one time, organ uptake probes were also used to measure kidney time-activity data
for the evaluation of renal function. In addition, organ uptake probes have been adapted to such
well counter applications as counting of blood samples and wipes.
FIG. 10.7. (a) A typical organ (thyroid) uptake probe system, including an integrated
computer, set-up for a thyroid uptake measurement (AtomLab 950 Thyroid Uptake System,
Biodex Medical Systems, Shirley, NY, USA). The rather large neck to collimator aperture
distance (typically of the order of 30 cm) should be noted. Although this reduces the overall
sensitivity of the measurement of the neck count rate, it serves to minimize the effect of the
exact size, shape and position of the thyroid, and the distribution of radioisotope within the
gland. (b) A diagram (side view) of the open, or flat-field, diverging collimator typically used
with thyroid uptake probes. (Courtesy of Biodex Medical Systems, Inc, Shirley, NY, USA.)
Total body activity (%) = {[(A/tA − B/tB)(P/tP − B/tB)]^(1/2) / [(A(0)/tA(0) − B(0)/tB(0))(P(0)/tP(0) − B(0)/tB(0))]^(1/2)} × 100%  (10.3)
where
A and P are the anterior and posterior total body counts, respectively;
B
is the room (background) counts;
tA, tP and tB are the counting intervals for anterior, posterior and room counts,
respectively;
and (0) indicates the same quantities at time zero.
As above, the total body activity may be corrected for radioactive decay
from the time of measurement to the time of administration (by multiplying
the right side of Eq. (10.3) by e^(λΔt), where λ is the physical decay constant of the
administered isotope and Δt is the administration to measurement time interval).
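The geometric mean calculation described above can be sketched as follows (function and argument names are ours; arguments with a trailing 0 correspond to the time zero, "(0)", quantities):

```python
import math

def total_body_activity_percent(a, t_a, p, t_p, b, t_b,
                                a0, t_a0, p0, t_p0, b0, t_b0):
    """Per cent retained whole body activity in the geometric mean form of
    Eq. (10.3): the square root of the product of the background-corrected
    anterior and posterior count rates, normalized to the same quantity at
    time zero."""
    gm_now = math.sqrt((a / t_a - b / t_b) * (p / t_p - b / t_b))
    gm_zero = math.sqrt((a0 / t_a0 - b0 / t_b0) * (p0 / t_p0 - b0 / t_b0))
    return gm_now / gm_zero * 100.0
```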
Ḋ = Γδ A0 e^(−λΔt)/d²  (10.4)
where
A0 is the activity of the reference source at calibration;
λ is the physical decay constant of the radionuclide comprising the reference
source;
Δt is the time interval between the calibration of the reference source and the
current measurement;
Γδ is the air kerma rate constant (the subscript δ indicates that only photons
with energies greater than δ, typically set at 20 keV, are included) of the
radionuclide comprising the reference source;
and d is the distance between the reference source and the meter (Table 10.2).
The dose rates should be measured on each scale and, by appropriate
adjustment of the source-meter distance, with two readings (~20% and ~80% of
the maximum) on each scale. For all readings, the expected and measured dose
rates should agree within ±10%.
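The expected reading in Eq. (10.4) can be sketched as follows (the numeric inputs, including the kerma rate constant, are illustrative only, not tabulated values):

```python
import math

def expected_dose_rate(a0_mbq, decay_constant, dt, kerma_rate_constant,
                       distance_m):
    """Expected dose rate from a point reference source, as in Eq. (10.4):
    rate = Gamma_delta * A0 * exp(-lambda * dt) / d**2.

    Units must be kept consistent: e.g. A0 in MBq, Gamma_delta in
    uGy*m2/(MBq*h) and d in m gives a rate in uGy/h.
    """
    current_activity = a0_mbq * math.exp(-decay_constant * dt)
    return kerma_rate_constant * current_activity / distance_m ** 2

# Doubling the source-meter distance quarters the expected reading
# (inverse square law); constants here are placeholders.
r_1m = expected_dose_rate(100.0, 0.0231, 1.0, 0.08, 1.0)
r_2m = expected_dose_rate(100.0, 0.0231, 1.0, 0.08, 2.0)
```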
Many nuclear medicine facilities have their survey meters calibrated by
the institutional radiation safety office or by a commercial calibration laboratory.
In addition to a calibration report (typically, a one page document) specifying
the reference source(s) used, the measurement procedure, and the measured
and expected exposure rates, a dated sticker summarizing the calibration results
should be affixed to the meter itself.
10.5.3. Dose calibrator
Among routine dose calibrator QC tests3, constancy must be checked
daily and accuracy and linearity at least quarterly; daily checks of accuracy
are recommended. For the constancy test, an NIST-traceable reference source,
such as 57Co, 68Ge or 137Cs (Table 10.2), is placed in the dose calibrator and the
3 At the installation of a dose calibrator, the geometry dependent response for 99mTc
must be measured and volume dependent (2–25 mL) correction factors relative to the standard
volume (e.g. 10 mL) derived. This procedure is required periodically following installation.
TABLE 10.2.

Radionuclide | Half-life | Physical decay constant | Photopeak energy Eγ of principal X ray or γ ray | Quality control application | Geometry and activity
57Co | 272 d | 0.00254/d | 122 keV | Well counter constancy and accuracy; dose calibrator accuracy and constancy | Test tube-size rod, ~37 kBq; vial/small bottle, 185–370 MBq
68Ge(b) | 287 d | 0.00241/d | 511 keV (annihilation) | Well counter constancy and accuracy; dose calibrator accuracy and constancy | Test tube-size rod, ~37 kBq; vial/small bottle, 185–370 MBq
137Cs | 30 a | 0.0231/a | 662 keV | Well counter constancy and accuracy; dose calibrator accuracy and constancy | Test tube-size rod, ~37 kBq; vial/small bottle, 185–370 MBq

(a) The air kerma rate constant Γδ is equivalent to the older specific γ ray constant Γ.
(b) Germanium-68 in a sealed source is in secular equilibrium with its short lived, positron emitting daughter 68Ga (half-life: 68 min).
activity reading on each scale recorded; day to day readings should agree within
±10%. For the accuracy test (sometimes also known as the energy linearity test), at
least two of the foregoing NIST-traceable reference sources are separately placed
in the dose calibrator and the activity reading on each scale recorded. For each
source, the measured activity on each scale and its current actual activity should
agree within ±10%.
FIG. 10.8. Set of lead-lined plastic sleeves (CalicheckTM Dose Calibrator Linearity Test Kit,
Calicheck, Cleveland, OH, USA) for evaluation of dose calibrator linearity by the shield
method. The set is supplied with a 0.64 cm thick lead base, a colour coded unlined sleeve (to
provide an activity measurement equivalent to the zero time point measurement of the decay
method) and six colour coded lead-lined sleeves providing attenuation factors nominally
equivalent to decay over 6, 12, 20, 30, 40 and 50 h, respectively. (Courtesy of Calicheck,
Cleveland, OH, USA.)
the set of lead sleeves) between the source and the dose calibrator's sensitive
volume, a decay-equivalent activity is measured for each sleeve. While the shield
method is much faster than the decay method for checking linearity (taking
minutes instead of days), an initial decay based calibration of the set of sleeves
is recommended to accurately determine the actual decay equivalence of each
shield.
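The nominal decay equivalence of each sleeve follows directly from the physical half-life of 99mTc (6.01 h); a short sketch of the expected factors:

```python
# Physical half-life of 99mTc in hours
TC99M_HALF_LIFE_H = 6.01

def decay_equivalent_factor(hours):
    """Fraction of 99mTc activity remaining after `hours` of decay; a
    linearity sleeve nominally equivalent to this decay time should reduce
    the dose calibrator reading by the same factor."""
    return 0.5 ** (hours / TC99M_HALF_LIFE_H)

# Nominal decay equivalences of the six sleeves described above (6-50 h)
factors = {h: decay_equivalent_factor(h) for h in (6, 12, 20, 30, 40, 50)}
```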
10.5.4. Well counter
The routine QC tests for well counters include checks of the photopeak
energy window (i.e. energy peaking) if the counter is equipped with a
multichannel analyser, background, constancy and efficiency (or sensitivity).
Prior to counting samples containing a particular radionuclide, the energy
spectrum should be checked to verify that the counter is properly peaked,
that is, that the radionuclide's photopeak coincides with the preset photopeak
energy window4. For each photopeak energy window used, the background
count rate should be checked daily. Importantly, electronic noise as well as
ambient radiation levels, which may be relatively high and variable in a nuclear
medicine facility, will produce a non-zero and potentially fluctuating background
count rate. Furthermore, even trace contamination of the counting well will
produce inaccurately high count rate values. Accordingly, a blank (i.e. an
empty counting tube or vial) should always be included to determine the current
background count. To check constancy, at least one NIST-traceable reference
source (Table 10.2) should likewise be counted each day; day to day net (i.e. gross
minus background) count rates should agree within 10%.
In addition, as noted above, for each radionuclide for which a particular
well counter is used, the counter should be calibrated, that is, its efficiency
(sensitivity) (in cpm/kBq) determined, at installation, annually and after any
repair (Eq. (10.1)).
10.5.5. Intra-operative probe
In addition to daily battery and background checks (as done for survey
meters), QC tests of intra-operative probes should include a daily bias check
for both the primary and any backup battery to verify that bias voltage (or
high voltage) is within the acceptable range. As intra-operative probes may not
provide a display of the energy spectrum, it may not be possible to visually check
that the probe is properly peaked, that is, that the photopeak coincides with the
preset photopeak energy window. The lower counts or count rates resulting from
an inappropriate energy window may, therefore, go unnoticed. Thus, a long lived
reference source or set of reference sources (such as 57Co, 68Ge and/or 137Cs
(Table 10.2)) should be available for daily checks of count rate constancy; a
marked change (e.g. >10%) in the net count rate from one day to the next may
indicate an inappropriate energy window setting or some other technical problem.
Ideally, the reference sources should each be incorporated into some sort of cap
that fits reproducibly over the probe so that spurious differences in count rates
due to variations in source-detector geometry are avoided.
10.5.6. Organ uptake probe
Aside from differences in counting geometry and sensitivity, uptake probes
and well counters actually have very much in common and the QC procedures
(checks of the photopeak energy window, background, constancy and
efficiency) are, therefore, analogous. Importantly, however, efficiency should
be measured more frequently, ideally for each patient, than for a well counter, so
that the probe net count rates can be reliably converted to thyroid uptakes for
individual patients.
BIBLIOGRAPHY
CHERRY, S.R., SORENSON, J.A., PHELPS, M.E., Physics in Nuclear Medicine, 3rd edn,
Saunders, Philadelphia, PA (2003).
NINKOVIC, M.M., RAICEVIC, J.J., ANDROVIC, A., Air kerma rate constants for γ emitters
used most often in practice, Radiat. Prot. Dosimetry 115 (2005) 247–250.
ZANZONICO, P., Routine quality control of clinical nuclear medicine instrumentation: A brief
review, J. Nucl. Med. 49 (2008) 1114–1131.
ZANZONICO, P., HELLER, S., Physics, instrumentation, and radiation protection, Clinical
Nuclear Medicine (BIERSACK, H.J., FREEMAN, L.M., Eds), Springer Verlag, Berlin
Heidelberg (2007) 1–33.
CHAPTER 11
NUCLEAR MEDICINE IMAGING DEVICES
M.A. LODGE, E.C. FREY
Russell H. Morgan Department of Radiology and Radiological Sciences,
Johns Hopkins University,
Baltimore, Maryland, United States of America
11.1. INTRODUCTION
Imaging forms an important part of nuclear medicine and a number of
different imaging devices have been developed. This chapter describes the
principles and technological characteristics of the main imaging devices used
in nuclear medicine. The two major categories are gamma camera systems and
positron emission tomography (PET) systems. The former are used to image
γ rays emitted by any nuclide, while the latter exploit the directional correlation
between annihilation photons emitted by positron decay. The first section of this
chapter discusses the principal components of gamma cameras and how they
are used to form 2D planar images as well as 3D tomographic images (single
photon emission computed tomography (SPECT)). The second section describes
related instrumentation that has been optimized for PET data acquisition. A major
advance in nuclear medicine was achieved with the introduction of multi-modality
imaging systems including SPECT/computed tomography (CT) and PET/CT. In
these systems, the CT images can be used to provide an anatomical context for
the functional nuclear medicine images and allow for attenuation compensation.
The third section in this chapter provides a discussion of the principles of these
devices.
11.2. GAMMA CAMERA SYSTEMS
11.2.1. Basic principles
The gamma camera, or Anger camera [11.1], is the traditional workhorse of
nuclear medicine imaging and its components are illustrated in Fig.11.1. Gamma
camera systems are comprised of four basic elements: the collimator, which
defines the lines of response (LORs); the radiation detector, which counts incident
photons; the computer system, which uses data from the detector to create 2D
histogram images of the number of counted photons; and the gantry system, which
supports and moves the gamma camera and patient. The overall function of the
system is to provide a projection image of the radioactivity distribution in the
patient by forming an image of γ rays exiting the body. Forming an image means
establishing a relationship between points on the image plane and positions in the
object. This is sometimes referred to as an LOR: ideally, each position in the
image provides information about the activity on a unique line through the object.
In gamma cameras, single photons are imaged, in contrast to PET where pairs of
photons are imaged in coincidence. Thus, in order to define an LOR, a lens is
required, just as in optical imaging. However, the energies of γ rays are so high
FIG.11.2. The drawing on the left demonstrates the image of two point sources that would
result without the collimator. It provides very little information about the origin of the photons
and, thus, no information about the activity distribution in the patient. The drawing on the
right illustrates the role of the collimator and how it defines lines of response through the
patient. Points on the image plane are uniquely identified with a line in space.
that bending them is, for all practical purposes, impossible. Instead, collimators
are used to act as a mechanical lens. The function of a collimator is, thus, to define
LORs through the object. Figure11.2 illustrates the function of and need for the
collimator, and, thus, the basic principle of single photon imaging.
11.2.2. The Anger camera
11.2.2.1. Collimators
As mentioned above, the collimator functions as a mechanical lens: it
defines LORs. The collimator accomplishes this by preventing photons emitted
along directions that do not lie along the LOR from reaching the detector. Thus,
collimators consist of a set of holes in a dense material with a high atomic
number, typically lead. The holes are parallel to the LORs. Ideally, each point in
the object would contribute to only one LOR. This requires the use of collimator
holes that are very long and narrow. However, such holes would allow very few
photons to pass through the collimator and be detected. Conversely, increasing
the diameter or decreasing the length of the holes results in a much larger range
of incident angles passing through the collimator. As illustrated in Fig.11.3, this
results in degraded resolution. As can be seen from this figure, each hole has a
cone of response and the diameter of the cone of response is proportional to the
distance from the face of the collimator.
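The distance and hole-geometry dependence described above can be sketched with the standard textbook approximation for geometric collimator resolution (R ≈ d·(l_eff + z)/l_eff, with l_eff = l − 2/µ; this formula is a common approximation, not given explicitly in the text, and the numbers are illustrative):

```python
def collimator_resolution_mm(hole_diameter_mm, hole_length_mm,
                             source_distance_mm, mu_per_mm=None):
    """Geometric collimator resolution (FWHM) by the standard approximation
    R = d * (l_eff + z) / l_eff, where l_eff = l - 2/mu accounts for septal
    penetration at the hole ends (l_eff = l if mu is omitted)."""
    if mu_per_mm is None:
        l_eff = hole_length_mm
    else:
        l_eff = hole_length_mm - 2.0 / mu_per_mm
    return hole_diameter_mm * (l_eff + source_distance_mm) / l_eff

# Resolution degrades linearly with source distance z and scales with hole
# diameter d, matching the behaviour described in the text.
r_near = collimator_resolution_mm(1.5, 25.0, 50.0)
r_far = collimator_resolution_mm(1.5, 25.0, 100.0)
```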
As discussed above, changing the diameter of collimator holes changes
the resolution and also the number of photons passing through the collimator.
The noise in nuclear medicine images results from statistical variations in the
number of photons counted in a given counting interval due to the random nature
of radiation decay and interactions with the patient and camera. The noise is
described by Poisson statistics, and the coefficient of variation (per cent noise)
is inversely proportional to the square root of the number of counts. Thus,
increasing the number of counts results in less noisy images. As a result, there
is an inverse relationship between noise and spatial resolution for collimators:
improving the resolution results in increased image noise and vice versa.
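The Poisson relationship above is a one-liner (a trivial sketch; the function name is ours):

```python
import math

def percent_noise(counts):
    """Coefficient of variation (per cent noise) of a Poisson-distributed
    count: 100 / sqrt(N)."""
    return 100.0 / math.sqrt(counts)

# 10 000 counts -> 1% noise; quadrupling the counts halves the per cent noise
noise_10k = percent_noise(10_000)
noise_40k = percent_noise(40_000)
```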
Another important characteristic of collimators is the opacity of collimator
septa to incident γ rays. In an ideal collimator, the septa would block all incident
radiation. However, in real collimators, some fraction of the radiation passes
through or scatters in the septa and is detected. These phenomena are referred
to as septal penetration and scatter. The amount of septal penetration and scatter
depends on the energy of the incident photon, the thickness and composition of
the septa, and the aspect ratio of the collimator holes. Since gamma cameras are
used to image radionuclides with energies over a wide range, collimators are
typically available that are appropriate for several energy ranges: low energy
314
collimators are designed for isotopes emitting photons with energies lower than
approximately 160 keV; medium energy collimators are designed for energies up
to approximately 250 keV; and high energy collimators are designed for higher
energies. It should be noted that in selecting the appropriate collimator for an
isotope, it is important to consider not only the photon energies included in the
image, but also higher energy photons that may not be included in the image.
For example, in 123I there are a number of low abundance high energy photons
that can penetrate through or scatter in septa and corrupt the images. As a result,
medium energy collimators are sometimes used for 123I imaging, despite the main
photopeak at 159 keV.
Point sources
Collimator
holes
FIG.11.3. Illustration of the concept of spatial resolution and how collimator hole length
and diameter affect spatial resolution. The lines from the point source through the collimator
indicate the furthest apart that two sources could be and still have photons detected at the
same point on the image plane (assumed to be the back of the collimator). Thus, sources closer
together than this would not be fully resolved (though they might be partially resolved). From
this, we see that the resolution decreases as a function of distance. It should also be noted that
the resolution improves proportionally with a reduction in the width of the collimator holes
and improves (though not proportionally) with the hole length.
sensitivity is relatively low because there is less open area for a given resolution
and septal thickness. Hexagonal hole collimators are the most common design for
gamma cameras using continuous crystals. They have the advantage of relatively
direction independent response functions and higher sensitivity than a round hole
collimator with the same resolution and septal thickness. Square hole collimators
are especially appropriate for detectors that have pixelated crystals. Having
square holes that match the spacing and inter-gap distance of these detectors
results in good sensitivity. However, the resolution varies
significantly depending on the direction, being worse by a factor of √2 along
the diagonal direction compared to parallel to the rows of holes.
FIG.11.4. Examples of the three major hole shapes used in multi-hole collimators. They are,
from left to right: round, hexagonal and square holes. In all cases, black indicates septa and
white open space. The diameter (often called flat-to-flat for square and hexagonal holes) is d
and the septal thickness is s.
Foil collimators are created from thin lead foils. The foils are stamped and
then glued together to build up the collimator layer by layer. Figure11.5 shows a
schematic of how two layers are stamped and glued to form the holes. It should be
noted that in the stamping the septa that are glued must be thinner than the other
walls in order to retain uniform septal thickness and, thus, maximize sensitivity.
It is clear that precise stamping, alignment and gluing are essential to form a high
quality collimator. The septa in foil collimators can be made thinner than in cast
collimators. As a result, foil fabrication techniques are especially appropriate
for low energy collimators. Understanding the fabrication technology can help
in diagnosing problems with the collimator. For example, Fig.11.6 shows an
image with non-uniformities appearing as vertical stripes in the image of a sheet
source. This was a foil collimator and the non-uniformities apparently originated
from fabrication problems, resulting in some layers having different sensitivities
compared to other layers.
FIG.11.5. Illustration of fabrication of foil collimator by gluing two stamped lead foils. It
should be noted that the foils must be stamped so that the portions of the septa that are glued
are half the thickness of the rest of the septa. Furthermore, careful alignment is essential to
preserve the hole shapes.
FIG.11.6. Uniformity image of a defective foil collimator. The vertical stripes in the image
result from non-uniform sensitivity of the collimator due to problems in the manufacturing
process. The peppery texture is due to quantum noise and is visible because the grey level was
expanded to demonstrate the non-uniformity artefacts.
FIG.11.7. Illustration of the four common collimator geometries: (left to right) parallel,
converging, diverging and pinhole.
collimator penetration and scatter. This makes these collimators appropriate for
imaging radionuclides emitting high energy γ rays such as 131I. In addition, pinhole
collimators with changeable apertures can have different diameter pinholes. This
allows selection of resolution/sensitivity parameters relatively easily.
The collimator properties can be most completely described by the collimator
point source response function (PSRF), the noise-free image of a point source in
air with unit activity using an ideal radiation detector, as a function of position
in the object space. The shape of the collimator PSRF completely describes the
resolution properties, and when normalized to unit total volume is referred to as
the collimator point spread function (PSF).
Figure11.8 shows some sample collimator PSFs for an 131I point source
imaged with a high energy general purpose collimator and a medium energy
general purpose collimator. These PSFs are averaged over the position of the
source with respect to the collimator and, thus, do not show the hole pattern. There
are several things to note from this figure. First, using a properly designed
collimator reduces septal scatter and penetration to very low levels, while they
become significant for a collimator not designed to handle the high energies of
131I. Second, the response function becomes wider as a function of distance,
demonstrating the loss of resolution as a function of increasing source to
collimator distance. Finally, there is evidence of the shape of the holes, which, in
this case, were hexagonal. The shape can be barely discerned in the shape of the
central portion of the response, which is due to photons passing through the
collimator holes. The geometry of the collimator is more evident in the septal
FIG.11.8. Sample images of the point spread function for an 131I point source at (left to right)
5, 10 and 20 cm from the face of a high energy general purpose collimator (top row) and
a medium energy general purpose collimator (bottom row). The images are displayed on a
logarithmic grey scale to emphasize the long tails of the point spread function resulting from
septal penetration and scatter. The brightness of the image has been increased to emphasize
the septal penetration and scatter artefacts.
penetration and scatter artefacts. In fact, the septal penetration is highest along
angular directions where the path through the septa is thinnest, giving rise to the
spoke-like artefacts in the directions perpendicular to the walls of the hexagonal
holes.
Another useful way to describe and understand the resolution properties of
the collimator is in terms of its frequency response. This can be described by the
collimator modulation transfer function, which is the magnitude of the Fourier
transform of the collimator PSF. Figure11.9 shows some sample profiles through
the collimator modulation transfer function. It should be noted that the collimator
response does not pass high frequencies very well and, for some frequencies,
the response is zero. This attenuation of high frequencies results in a loss of
fine detail (i.e. spatial resolution) in the images. Finally, the cut-off frequency
decreases with distance from the collimator and different collimator designs have
different frequency responses.
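The relationship between the PSF and the MTF can be sketched numerically. The following Python fragment (a sketch with illustrative values only; the Gaussian PSF is an assumption, not a measured collimator response) computes an MTF as the normalized magnitude of the Fourier transform of a sampled PSF profile:

```python
import numpy as np

def mtf_from_psf(psf, pixel_mm):
    """Compute a 1-D modulation transfer function from a sampled PSF profile.

    The MTF is the magnitude of the Fourier transform of the PSF,
    normalized to 1 at zero spatial frequency.
    """
    ft = np.abs(np.fft.rfft(psf))
    mtf = ft / ft[0]
    freqs = np.fft.rfftfreq(len(psf), d=pixel_mm)  # cycles/mm
    return freqs, mtf

# Illustrative Gaussian PSF (assumed FWHM = 8 mm, 1 mm sampling)
x = np.arange(-32, 32)
sigma = 8.0 / 2.355  # FWHM = 2.355 * sigma for a Gaussian
psf = np.exp(-x**2 / (2 * sigma**2))
freqs, mtf = mtf_from_psf(psf, pixel_mm=1.0)
```

Consistent with the discussion above, the computed MTF falls off towards high spatial frequencies, and a wider PSF (a more distant source) gives a lower cut-off frequency.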
FIG.11.9. Sample profile through the geometric modulation transfer functions (MTFs) for
low, medium and high energy (LE, ME and HE, respectively) general purpose (GP) and high
resolution (HR) collimators for a source 5 cm (left) and 20 cm (right) from the face of the
collimator.
FIG.11.10. Plot of the total collimator–detector point spread function for a medium energy
general purpose collimator imaging 131I, indicating the positions of the full width at half
maximum (FWHM) and the full width at tenth maximum (FWTM).
FIG.11.11. Plot of the FWHM (cm) as a function of distance from the collimator face for high
energy, medium energy and low energy general purpose (HEGP, MEGP and LEGP) and low
energy high resolution (LEHR) collimators.
FWHM = d(Z + L + B)/L (11.1)
Thus, it can be seen that, as described above, the FWHM is linearly related
to the distance from the face of the collimator and is proportional to that distance
when the distance is large compared to L + B.
The resolution of the collimator–detector system depends on both the
resolution of the collimator and the intrinsic resolution of the gamma camera. For
continuous-crystal gamma cameras, the intrinsic resolution can be modelled with
a Gaussian function. In this case, the total response function for the collimator–detector
is the convolution of the intrinsic resolution and the collimator response.
If the collimator response is approximated by a Gaussian function, the FWHM is
given by the Pythagorean sum of the intrinsic and collimator FWHMs:
FWHM_total = √(FWHM_collimator² + FWHM_intrinsic²) (11.2)
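Equations (11.1) and (11.2) can be combined in a short numerical sketch. In the following Python fragment, the collimator dimensions and intrinsic resolution are illustrative assumptions rather than the parameters of any particular camera:

```python
import math

def collimator_fwhm(d, L, Z, B):
    """Geometric collimator resolution (Eq. (11.1)): FWHM = d(Z + L + B)/L."""
    return d * (Z + L + B) / L

def total_fwhm(fwhm_collimator, fwhm_intrinsic):
    """Total system resolution (Eq. (11.2)): Pythagorean sum of the two FWHMs."""
    return math.hypot(fwhm_collimator, fwhm_intrinsic)

# Illustrative values (mm): 1.5 mm holes, 25 mm hole length, 10 mm gap to the
# image plane, source 100 mm from the collimator face, 3.5 mm intrinsic FWHM.
fc = collimator_fwhm(d=1.5, L=25.0, Z=100.0, B=10.0)  # 8.1 mm
ft = total_fwhm(fc, 3.5)                              # ~8.8 mm
```

It should be noted that the collimator term dominates at this distance, which is why the collimator, not the intrinsic resolution, usually limits gamma camera resolution at typical imaging distances.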
FIG.11.12. Diagram illustrating the collimator geometry used to derive the expression for the
full width at half maximum.
The integral of the collimator PSRF gives the sensitivity of the collimator.
This is, in principle, a dimensionless quantity which gives the fraction of emitted
photons that pass through the collimator, and is of the order of 10⁻³–10⁻⁴ for
typical nuclear medicine collimators. It is often useful to express the sensitivity
in terms of counts per unit activity per unit time, for example counts per second
per megabecquerel. For a parallel-hole collimator, the sensitivity is a function
of two terms: the solid angle of the hole, which is a function of (d/L)², and the
fraction of the active area of the collimator that is open area (hole) as compared
to septa. The second term can be described in terms of the unit cell: the smallest
geometric region that can be used to form the entire collimator by a set of simple
translations. The sensitivity S of a parallel-hole collimator is given by:
S = (a_open/(4πL²)) × (a_open/a_total) (11.3)
where
a_open is the open area of a collimator unit cell, i.e. the area of the hole itself;
and a_total is the total area of the unit cell, including the part of the collimator septa lying
in the unit cell.
From the above it can be seen that, for a parallel-hole collimator, the
sensitivity is independent of the distance to the collimator face. This is because
there is a balance between the decrease in sensitivity from a single hole and the
increase in the number of holes through which photons can pass as a function
of increasing distance from the collimator face. It should also be noted that
a_open is proportional to d², and d is proportional to the FWHM resolution. The
term a_open/a_total varies slowly as a function of d if d ≫ s. Thus, the sensitivity is
proportional to the square of the resolution:
S = k·FWHM² (11.4)
where the constant k depends only weakly on the FWHM. Since noise is directly
related to the number of counts, there is a fundamental trade-off between
resolution and noise. From the above, it is also evident that maximizing the ratio
of aopen/atotal is important in terms of reducing noise for a given resolution.
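Equation (11.3) can be evaluated using the unit-cell areas of a hexagonal hole pattern. In the following sketch, the hexagonal-cell geometry and the hole dimensions are illustrative assumptions:

```python
import math

def parallel_hole_sensitivity(d, L, s):
    """Geometric sensitivity of a hexagonal-hole parallel collimator (Eq. (11.3)).

    d: hole flat-to-flat diameter, L: hole length, s: septal thickness.
    The unit cell of a hexagonal array has flat-to-flat size d + s, and a
    regular hexagon of flat-to-flat size f has area (sqrt(3)/2) * f**2.
    """
    a_open = (math.sqrt(3) / 2) * d**2         # open area of one hole
    a_total = (math.sqrt(3) / 2) * (d + s)**2  # area of the full unit cell
    return (a_open / (4 * math.pi * L**2)) * (a_open / a_total)

# Illustrative low energy collimator: d = 1.5 mm, L = 25 mm, s = 0.2 mm
S = parallel_hole_sensitivity(1.5, 25.0, 0.2)  # of the order of 1e-4
```

The result falls in the 10⁻³–10⁻⁴ range quoted above, and doubling d (i.e. doubling the FWHM) roughly quadruples S, as Eq. (11.4) predicts.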
11.2.2.2. Scintillation crystals
The scintillation crystal in the gamma camera converts γ ray photons
incident on the crystal into a number of visible light photons. The characteristics
and principle of scintillation radiation sensors are described in more detail in
Chapter6. Ideally, the crystal would be dense and of a high Z material in order
to stop all incoming γ rays with photoelectric events. It should have high light
output to provide low quantum noise for energy and position estimation. The
decay time of the light output needs to be fast enough to avoid a pile-up of
pulses at the count rates experienced in nuclear medicine imaging procedures.
The wavelength spectrum of the scintillation photons should be matched to the
sensitivity of the photodetectors used to convert the scintillation signal into an
electrical signal. In addition to these technical properties that directly affect
image quality, there are a number of desirable material properties that influence
the cost of the device. These include the cost of the raw material, the ease of
growing large single crystals and the sensitivity to environmental factors such as
humidity. Owing to its unique combination of desirable properties, the crystals
used in gamma cameras based on photomultiplier tubes (PMTs) are typically
made of NaI(Tl). Gamma cameras based on solid state photodetectors require
a different light spectrum and typically CsI(Tl) is used in these devices. The
scintillation properties of these materials are discussed in detail in Chapter6.
As will be described below, the interaction position of the γ ray with the
detector is estimated based on the distribution of the scintillation light to an
array of PMTs. It is important that the distribution of light be as independent as
possible of the depth of interaction in the crystal and depends in a predictable
FIG.11.13. Plot of the intrinsic sensitivity of a NaI scintillation crystal as a function of energy
for several crystal thicknesses.
A final important property of the scintillation crystal is the light output. This
is a characteristic of the scintillator material, and is the number of scintillation
photons per unit energy deposited in the crystal by a photon. Thus, the total light
is proportional to the energy deposited in the crystal, and can, therefore, be used
to estimate the energy of the γ ray. The number of scintillation photons produced
for a given event is a Poisson random variable. Thus, the larger the number of
scintillation photons, the smaller the coefficient of variation (standard deviation
divided by the mean) of the number of photons and, hence, of the estimated
photon energy. As a result, scintillators with high light output will provide higher
energy resolution. In addition, as will be seen below, the light distribution over
the photodetector array is used to estimate the interaction position. Since the light
collected by each element in the array is also a Poisson random variable that is
proportional to the light output, a larger light output will result in higher precision
in the estimated position, and, thus, improved intrinsic spatial resolution. One
reason that NaI(Tl) is used in gamma cameras is its high light output.
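The dependence of energy resolution on light output can be illustrated with Poisson statistics. In the sketch below, the light yield and the light collection efficiency are rough assumed values, not measured properties of any particular camera:

```python
import math

def energy_cv(light_yield_per_kev, energy_kev, collection_efficiency):
    """Coefficient of variation of the collected scintillation photon count.

    For a Poisson-distributed count N, CV = 1/sqrt(N); a higher light
    output therefore gives a smaller CV and better energy resolution.
    """
    n = light_yield_per_kev * energy_kev * collection_efficiency
    return 1.0 / math.sqrt(n)

# Illustrative: ~38 photons/keV for NaI(Tl), a 140 keV photon, and an
# assumed 15% of the light contributing to the measured signal.
cv = energy_cv(38, 140, 0.15)    # ~0.035
fwhm_percent = 2.355 * cv * 100  # ~8.3% (Gaussian approximation)
```

This simple counting-statistics estimate is of the same order as the energy resolution of real NaI(Tl) cameras, although other effects (non-proportionality, electronic noise) also contribute in practice.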
11.2.2.3. Photodetector array
The next element in the radiation detector is the photodetector array.
This array measures the distribution of scintillation photons incident on the
array and converts it into a set of pulses whose charge is proportional to the
number of scintillation photons incident on each corresponding element in
the array. As described below, the output of this array is used to compute the
interaction position of the γ ray in the scintillator. In clinical gamma cameras, the
photodetector array comprises a set of 30–90 PMTs arranged in a hexagonal
close packed arrangement, as illustrated in Fig.11.14. More details on the
operation and characteristics of PMTs are provided in Chapter6. In brief, PMTs
have the advantage that they are very well understood, have a moderate cost, are
relatively sensitive to low levels of scintillation light and have a very high gain. In
some commercial designs, PMTs have been replaced by semiconductor detectors
such as photodiodes. Generally, these devices are somewhat less sensitive and
have a lower gain than PMTs, resulting in more noise in the charge signal and,
thus, less precision in the energy and position estimated from the charge signal.
Since the position and energy are estimated from the set of charge signals
from the elements in the photodetector array, it is highly desirable that the
proportionality constants relating light intensity to charge be the same for all
of the photodetectors. This can be ensured by choosing matching devices and
by carefully controlling and matching the electronic gain. For PMTs, the gain is
controlled by the bias voltage applied to the tubes. Since gain is also a function of
temperature, the temperature of the photodetectors must be carefully controlled.
The gains of PMTs are very sensitive to magnetic fields, even those as small
as the Earth's magnetic field. Thus, the PMTs must be magnetically shielded
using mu-metal. Finally, since the gains of tubes can drift over time, periodic
recalibration is necessary.
One of the major advantages of the gamma camera is that the number of
PMTs is much smaller than the number of pixels in images from the gamma
camera. In other words, in contrast to semiconductor detectors where a separate
set of electronics is required for each pixel, the gamma camera achieves a great
reduction in cost and complexity by estimating the interaction position of the
γ ray based on the output of the array of PMTs.
To understand the position estimation process, Fig.11.15 is considered.
This figure shows a cross-section through two PMTs, and the crystal and exit
window. The number of photons collected by a PMT directly (i.e. without
reflection) will be proportional to the solid angle subtended by it at the interaction
point. As can be seen in the figure, the interaction position is offset to the right,
and there is a smaller solid angle subtended by PMT 1 than by PMT 2. Thus, the
signal from PMT 1 will be smaller than for PMT 2. If the interaction position
moves to the left, so that it lies along the line separating the two PMTs, there will
be an equal amount of light collected by each PMT. The relationship between the
light collected by the two PMTs and the lateral interaction position can be used
to estimate the interaction position, as will be described in more detail below. In
addition, the total scintillation light collected by all of the PMTs is proportional
to the energy deposited by the γ ray in the crystal. Thus, the total charge can be
used to estimate the energy of the photon.
FIG.11.14. Cross-section of a gamma camera at the back face of the entrance window
showing the hexagonal close packed array of photomultiplier tubes. The dotted line indicates
the approximate region where useful images can be obtained.
FIG.11.15. Cross-section through two photomultiplier tubes (PMTs), the exit window and
crystal in a gamma camera. The interaction position of a γ ray photon is indicated. The solid
angles subtended by PMTs 1 and 2 are Ω1 and Ω2, respectively.
perform this function. This involves digitizing the output waveform from the
preamplifier. This has a number of advantages including providing the ability to
change the trade-off between energy resolution and count rate, depending on the
requirements of the particular imaging procedure. In addition, this method also
provides digital estimates of the pulse heights that can be used in digital position
and energy estimation algorithms.
11.2.2.5. Position and energy estimation
The goal of the radiation detector is to provide an estimate of the energy
and interaction position of each γ ray incident on the detector. The output of
the photodetector array and amplifier system is a set of voltage signals for each
photon. The sum of these voltages is proportional to the γ ray energy,
and the position is a function of the set of voltage values. The position and energy
estimation circuits estimate the γ ray energy and position from the set of voltage
values from the photodetector array.
One way of doing this is to use a resistive network to divide the signals
from the array elements among a set of four signals often referred to as X+, X−, Y+
and Y−, as illustrated in Fig.11.16. The resistor values for each PMT are chosen
so that the charge is divided in proportion to its position with respect to the centre
of the array. For example, for a PMT in the centre horizontally, the resistances
for X+ and X− would be equal. Similarly, for a PMT in the centre vertically, the
resistances for Y+ and Y− would be equal.
Using the scheme described above, the energy E can be computed using:
E = X+ + X− + Y+ + Y− (11.6)
However, one limitation of this method is that the total amount of light
collected is dependent on position. For example, if the interaction is directly
under a PMT, a larger fraction of the total light will be collected, resulting in a
larger value of E than if the interaction is in the gap between PMTs. This means
that the estimate of the energy will vary spatially. As discussed below, this has an
impact on camera uniformity. As a result, the energy must be corrected based on
the interaction position.
Under the assumption that the light collected by a PMT is proportional to
the distance from its centre, and with the correct resistor values, the interaction
position, defined by x and y can be computed using:
x = (X+ − X−)/E and y = (Y+ − Y−)/E (11.7)
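Equations (11.6) and (11.7) can be sketched directly in code. In the fragment below, the four signal values are arbitrary illustrative numbers:

```python
def anger_position(xp, xm, yp, ym):
    """Estimate energy and position from the four resistive network signals.

    Implements Eqs (11.6) and (11.7): E = X+ + X- + Y+ + Y-,
    x = (X+ - X-)/E and y = (Y+ - Y-)/E.
    """
    e = xp + xm + yp + ym
    return e, (xp - xm) / e, (yp - ym) / e

# An event slightly to the right of centre: X+ exceeds X-, Y+ equals Y-
e, x, y = anger_position(0.30, 0.20, 0.25, 0.25)
# e = 1.0, x ~ 0.1 (right of centre), y = 0.0 (vertically centred)
```

The division by E in Eq. (11.7) normalizes the position estimate, so that the same interaction position is obtained regardless of the total amount of light collected.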
11.2.2.6. Corrections
As mentioned above, the energy and position estimation are non-ideal,
resulting in errors in energy and position estimates. These errors give rise to
non-uniform sensitivity in the camera. Thus, to obtain clinically acceptable
images, energy, spatial and uniformity corrections are needed. The need for
these corrections is illustrated in Fig.11.17. An image is shown resulting from
a uniform distribution of γ rays on the camera with the collimator removed,
often referred to as an intrinsic flood image. The substantial non-uniformity, the
presence of edge packing artefacts near the edge of the FOV, and the visibility of
the tube pattern should be noted.
FIG.11.17. Intrinsic flood image of gamma camera without energy, spatial or sensitivity
corrections.
FIG.11.18. Sample energy spectrum for 140keV photons for the cases of average, 2% lower
than average and 2% higher than average light collection efficiency. The variation in light
collection efficiency results in a shift of the energy spectrum, which results in non-uniform
sensitivity.
FIG.11.19. Intrinsic flood images for a gamma camera having a poor (left) and good
(right) set of corrections applied. It should be noted that the images are windowed so that
the brightness represents a relatively small range of count values in order to amplify the
differences. The peppery texture is due to quantum noise and is to be expected. The quantum
noise is exaggerated because of the windowing used, and can be reduced by acquiring very
high-count flood images.
FOV from the camera face, then the irradiation of the camera can be considered
uniform. Since the uniformity of the camera will, in general, vary depending
on the energy of the isotope and energy window used, this correction should
ideally be made for each isotope and energy window used. The count rate for the
acquisition should be within acceptable limits to avoid high count rate effects.
Extrinsic flood images are made using a flood or sheet source. Fillable
flood sources have the advantage that they can be used for any isotope. However,
great care must be taken in filling the phantom to remove bubbles, mix the
activity and maintain a constant source thickness. In addition, images must be
obtained for each collimator used with a given isotope. As a result, 57Co sheet
sources are often used to obtain extrinsic flood images. These have the advantage
of convenience but are, strictly speaking, valid for only a single isotope.
One way to take advantage of both approaches is to perform uniformity
corrections using a combination of intrinsic flood images for each isotope
used and an extrinsic flood image obtained for the collimator in question. This
approach assumes that collimator uniformity is independent of energy and can,
thus, be measured with, for example, a 57Co sheet source. Not all equipment
vendors support this approach. Some vendors assume that the energy and linearity
corrections produce uniformity that is energy independent, and they, thus,
recommend only the use of an extrinsic flood image for uniformity correction.
Another approach is to first confirm the uniformity of all collimators via a sheet
source flood image with intrinsic correction for 57Co. Then, an extrinsic flood
image for each isotope used is acquired and used in uniformity correction for that
isotope, assuming that the collimator is sufficiently uniform. The best approach
depends on the characteristics of the individual camera.
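Whichever flood images are used, the resulting uniformity correction is applied multiplicatively to subsequent images. A minimal sketch, assuming a simple mean-normalized correction map derived from a high-count flood:

```python
import numpy as np

def uniformity_correction_map(flood):
    """Derive a multiplicative correction map from a high-count flood image.

    Each pixel is scaled so that the corrected flood becomes flat at the
    mean count level; the same map is then applied to patient images.
    """
    flood = np.asarray(flood, dtype=float)
    return flood.mean() / flood

# Illustrative 2x2 "flood": one region is 10% too sensitive
flood = np.array([[100.0, 110.0], [100.0, 100.0]])
cmap = uniformity_correction_map(flood)
corrected = flood * cmap  # every pixel becomes the mean value, 102.5
```

As noted above, the flood must be acquired with very high counts; otherwise, the quantum noise in the flood itself is propagated into every corrected image.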
11.2.2.7. Image framing
The final step in generating gamma camera images is image framing.
Image framing refers to building spatial histograms of the counts as a function of
position and possibly other variables. This involves several steps, and is typically
done either by microprocessors in the camera or in an acquisition computer.
In this step, position is mapped to the elements in a 2D matrix of pixels. The
relationship between pixel index and physical position is linearly related to the
ratio of the maximum dimension of the FOV of the camera to the number of
pixels, the zoom factor and an image offset. The zoom factor allows enlarging
the image so that an object of interest fills the image. This can be useful, for
example, when imaging small objects. It results in a pixel size in the image that
is a factor of 1/zoom factor as large as in the unzoomed (zoom factor equals
one) image. It should be noted that even though the pixel size is decreased, the
resolution of the image will not necessarily be improved as long as the original
pixel size is smaller than the intrinsic resolution. For example, if the native pixel
size is 3.2mm and the intrinsic resolution is 4mm, a zoom factor of two will result in
a pixel size of 1.6mm, but the intrinsic resolution will still be 4mm. An image
offset can be used to shift the image, so that an object of interest is in the centre
of the acquired image.
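The pixel size relationship described above can be written as a one-line function. In the sketch below, the 400 mm FOV and 128 pixel matrix are illustrative assumptions:

```python
def pixel_size_mm(fov_mm, n_pixels, zoom=1.0):
    """Pixel size for a given camera FOV, matrix size and zoom factor."""
    return fov_mm / (n_pixels * zoom)

# Illustrative 400 mm FOV framed into a 128 x 128 matrix
p1 = pixel_size_mm(400.0, 128)          # 3.125 mm
p2 = pixel_size_mm(400.0, 128, zoom=2)  # 1.5625 mm; intrinsic resolution unchanged
```

As the text notes, halving the pixel size by zooming does not improve the resolution if the unzoomed pixels are already smaller than the intrinsic resolution.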
In addition to adding counts to the appropriate pixel spatially, the framing
algorithm performs a number of other important functions. The first is to reject
photons that lie outside of the energy window of interest. This is done to reject
scattered photons. Gamma cameras typically offer the ability to simultaneously
frame images corresponding to more than one energy window. This can be useful
for isotopes having multiple photopeaks, for acquiring data in energy windows
used by scatter compensation algorithms or for acquiring simultaneous images
of two or more radionuclides. Framing software typically enables the summation
of photons from multiple, discontiguous energy windows into one image as well
as simultaneously framing multiple images from different energy windows into
different images. There is often a limited number of energy windows that can
be framed into a single image, and a limit on the number of images that can be
framed at one time. These limits may depend on the image size, especially if the
framing is done by a microprocessor in the camera that has limited memory.
A second important function provided by the framing system is the ability to
obtain a sequence of dynamic images. This means that photons are recorded into
a set of images depending on the time after the start of acquisition. For example,
images could be acquired in a set of images with a frame duration of 10 s. Thus,
for the first 10 s, photons are recorded into the first image; for the second 10 s,
they are recorded into a second image; and so on. Thus, just as multiple images
are obtained in the case of a multi-energy window acquisition, multiple images
are obtained corresponding to a sequential set of time intervals. This is illustrated
in Fig.11.20, where seven dynamic frames are acquired for a time interval T.
Dynamic framing is used for monitoring processes such as kidney function,
gastric emptying, etc. The time frames are often not equal in duration as there
may be more rapid uptake at early times and a later washout phase in which the
change in activity with time is slower. The number or acquisition rate of dynamic
frames is often limited due to constraints in framing memory and this limit can
depend on the image size.
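The assignment of events to dynamic frames amounts to a histogramming operation on the event timestamps. A minimal sketch, assuming equal-duration frames:

```python
import numpy as np

def frame_dynamic(event_times, frame_duration, n_frames):
    """Assign events to dynamic frames by arrival time.

    Returns the frame index for each event; events arriving after the
    last frame has closed are marked with -1 and discarded.
    """
    idx = np.floor(np.asarray(event_times) / frame_duration).astype(int)
    idx[idx >= n_frames] = -1
    return idx

# Events at 2, 12 and 75 s with 10 s frames and 7 frames in total
frames = frame_dynamic([2.0, 12.0, 75.0], 10.0, 7)  # frames 0, 1 and -1
```

Gated framing follows the same pattern, except that each timestamp is measured relative to the most recent physiological trigger rather than the start of the acquisition.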
Gated acquisition is similar to dynamic acquisition in that photons are
recorded into a set of images depending on the time they are detected. However,
in gated acquisition, the time is relative to a physiological trigger, such as an
electrocardiogram (ECG) signal that provides a signal at the beginning of each
cardiac cycle. This is appropriate for processes that are periodic. The photons
are counted into a set of frames, each of which corresponds to a subinterval of
the time between the two triggers. For example, the bottom two illustrations
FIG.11.20. Dynamic acquisition.
FIG.11.21. (a) A cross-section of a dual head gamma camera capable of acquiring two views
simultaneously. It should be noted in this example that the heads are oriented 180° apart,
although a 90° configuration is also possible. SPECT data acquisition requires rotation
of the gamma camera heads about the long axis of the patient as indicated. (b) A transverse
slice with the position of four different camera orientations superimposed to illustrate the
acquisition of multiple angular views.
11.2.3. SPECT
11.2.3.1. Gamma camera SPECT systems
In addition to the software requirements for image reconstruction, SPECT
is associated with hardware requirements that are beyond those needed for
planar imaging. Although various SPECT configurations have been developed,
the most common implementation involves the use of a conventional gamma
camera in conjunction with a gantry that allows rotation of the entire detector
head about the patient [11.2]. The gantry rotation is about the long axis of the
patient (seeFig.11.21) and is typically performed in discrete steps (step and
shoot), although continuous motion may also be supported. During rotation of
the gantry, the patient bed typically does not move, so SPECT data acquisition
is more similar to conventional CT than to spiral CT, in this respect. In this way,
planar views of in vivo radioactivity distribution can be acquired at different
angular orientations and these data can be used to form the projections that are
required for image reconstruction by computed tomography.
In principle, rotation of the gamma camera over 180° allows for the
acquisition of sufficient projections for tomographic reconstruction. However,
in practice, opposing views acquired 180° apart differ due to various factors
(photon attenuation, depth dependent collimator response) and SPECT data are
commonly acquired over 360°. The theory of computed tomography determines
the number of angular samples that are required, but for many SPECT studies,
around 128 views may be acceptable. The time needed to acquire these multiple
projections with adequate statistical quality is a practical problem for clinical
SPECT where patient motion places a limit on the time available for data
acquisition. In an effort to address this issue, a common approach in modern
SPECT designs is to increase the number of detector heads, so that multiple
views can be acquired simultaneously. Dual detector head systems currently
predominate, although triple detector head gamma cameras also exist. Increasing
the number of detector heads increases the effective sensitivity of the system for
SPECT, at the expense of increasing cost. Dual head gamma cameras are often
considered the preferred configuration for systems intended not just for SPECT
but also for general purpose applications, including whole body studies where
simultaneous acquisition of anterior and posterior planar images is required.
In addition to the rotational motion required for SPECT, flexibility is
also required in the relative positioning of the detector heads. For general
purpose SPECT with a dual head system, the two heads are typically oriented
in an opposing fashion (sometimes referred to as H-mode) and 360° sampling
is achieved by rotation of the gantry through 180°. In contrast, cardiac SPECT
is often performed with the detectors oriented at 90° to each other (sometimes
referred to as L-mode). In this mode, the gantry rotates through 90° and the two
detectors acquire projections over 180° from the right anterior oblique position to
the left posterior oblique position. Despite acquiring only 180° of data, this mode
has advantages for cardiac applications as it can minimize the distance between
the heart and the detectors, thus reducing attenuation and depth dependent losses
in spatial resolution. Other approaches to minimizing the distance between the
detectors and the patient during SPECT data acquisition involve further control of
the rotational motion of the detector heads. For detectors rotating about a circular
orbit, this involves adjusting the radius of rotation for individual studies, so as
to minimize the source to collimator distance. Other options include detectors
that rotate, not in a circular orbit, but in an elliptical orbit, or, alternatively, a
variable rotational motion that contours to the outline of the body.
The flexibility of the motions that are available in modern SPECT systems
makes it particularly important to ensure that the detectors are correctly aligned.
This means that the specified angle of rotation is accurately achieved at all angles.
The detector heads also need to be perfectly oriented parallel to the z axis of the
system, such that each angular view is imaging the same volume. Furthermore,
it is important that the centre of each angular projection is consistent with the
centre of mechanical rotation. Errors due to these factors can potentially lead
to a loss of spatial resolution and the introduction of image distortion or ring
artefacts. In order to identify and correct these issues, an experimental centre of
rotation procedure is employed. A small point source is placed in the FOV at an
off-centre location. SPECT data acquisition is performed and deviations from the
expected sinusoidal pattern are measured in the resulting sinograms.
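The idea behind this fit can be sketched numerically. In the hypothetical example below, the transaxial centroid of the point source, measured at each projection angle, is fitted to a sinusoid plus a constant offset; the offset estimates the centre of rotation error. All names and values are illustrative.

```python
import numpy as np

def estimate_cor_offset(angles_deg, centroids):
    """Fit centroid(theta) = A*sin(theta) + B*cos(theta) + C.
    C is the estimated centre of rotation offset, in pixels from the
    projection centre."""
    theta = np.radians(angles_deg)
    design = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    coeffs, *_ = np.linalg.lstsq(design, centroids, rcond=None)
    return coeffs[2]  # the constant term C

# Simulated acquisition: source 40 pixels off-centre, COR shifted by 1.5 pixels
angles = np.arange(0, 360, 3)                      # 120 views over 360 degrees
true = 40 * np.sin(np.radians(angles) + 0.7) + 1.5
measured = true + np.random.default_rng(0).normal(0, 0.2, angles.size)
print(estimate_cor_offset(angles, measured))       # close to 1.5
```

A constant offset found in this way can then be applied as a shift of the projections before reconstruction.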
FIG. 11.22. (a) A series of planar views acquired at different angular orientations. A sample of four views has been extracted from a total of 128 views acquired about 360°. It should be noted that the z axis represents the axial position and is the axis of gantry rotation. (b) A sinogram corresponding to a particular axial location. The red lines in (b) indicate 1D projections that have been extracted from the corresponding planar views shown in (a).
CHAPTER 11
$$p_{\varphi}(t) = \int_0^{\infty} a(l\hat{n} + t\hat{m})\,\mathrm{e}^{-\int_0^{l} \mu(l'\hat{n} + t\hat{m})\,\mathrm{d}l'}\,\mathrm{d}l \qquad (11.8)$$
where the geometry is illustrated in Fig. 11.23. In this equation, the unit vectors
n and m are as described in the legend of Fig. 11.23, t is the transaxial distance
in the projection from the projected position of the origin and l is the distance
along the projection line from the face of the detector. a(x) is the activity
distribution and gives the activity at point x. It should be noted that the integral
in the exponent represents the integral through the attenuation distribution μ(x)
from the point x = ln + tm back to the detector. Thus, the exponential represents
the attenuation of photons emitted at x as they travel back towards the detector.
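A discretized version of this integral for a single projection ray can be sketched as follows; the grid spacing and values are illustrative assumptions, not from the text.

```python
import numpy as np

def attenuated_ray(activity, mu, dl):
    """activity, mu: 1D samples along the ray, ordered from the detector
    face outwards. Returns the attenuated line integral p(t)."""
    # attenuation path length from each sample point back to the detector
    path = np.concatenate([[0.0], np.cumsum(mu[:-1] * dl)])
    return np.sum(activity * np.exp(-path) * dl)

dl = 0.1                                        # cm per sample
activity = np.zeros(200); activity[100] = 5.0   # point-like source 10 cm deep
mu = np.full(200, 0.15)                         # uniform water-like mu (cm^-1)
p = attenuated_ray(activity, mu, dl)
print(p)   # the source contribution is reduced by exp(-0.15 * 10)
```

Without the exponential factor, the same sum would reduce to the ordinary (unattenuated) line integral used in the PET case.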
FIG. 11.23. Projection geometry used to describe the attenuated projection in Eq. (11.8). In
this figure, the projection is at an angle φ. A parallel-hole collimator is assumed, and the unit
vector n is perpendicular to the collimator and parallel to the projection rays. The unit vector
m is parallel to the collimator face and perpendicular to n. The variable t is the distance
along the detector from the projected position of the origin.
As can be seen from the above equation, unlike PET, the attenuation
is not constant for a projection ray, but instead varies along the ray. Using
reconstruction methods that do not model this effect produces both artefacts and
a loss of quantitative accuracy in the resulting images. The artefacts can include
streak artefacts, resulting from highly attenuating objects such as bones, catheters
or medical devices; shadows, due to greater attenuation between an object and
the detector in some views than in others (e.g. breast or diaphragm artefacts in cardiac SPECT);
and a generally reduced image intensity in the centre of the image.
The first requirement to compensate for attenuation is knowledge of the
attenuation distribution in the patient. This is done by either assuming uniform
attenuation inside the object and extracting information about the body outline
from the emission data or using a direct transmission measurement. Assuming
a uniform attenuation distribution in the patient is only valid in regions such as
the head. Even in the head, bone and regions containing air, such as the sinuses,
result in imperfect estimates of the attenuation distribution and lead to imperfect
attenuation compensation. Myocardial perfusion imaging is an important
application for SPECT and, since attenuation can produce artefacts that obscure
actual perfusion defects, a number of commercial devices have been developed
to allow measurement of the attenuation distribution in the body. All of these
devices use transmission CT techniques to reconstruct the attenuation distribution
inside the body. The devices that have been developed can be divided into two
general classes: devices using radionuclide sources and devices based on X ray
tube sources. In both cases, a source of X rays or radiation is aimed at the body
and a detector on the opposite side of the body measures the transmitted intensity.
The intensity I_φ(t) passing through the body for a source with incident intensity
I_0, projection position t and projection view φ is given by:

$$I_{\varphi}(t) = I_0(t)\,\mathrm{e}^{-\int_0^{\infty} \mu(l\hat{n} + t\hat{m})\,\mathrm{d}l} \qquad (11.9)$$
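The practical use of this relation is that the attenuation line integral along each ray can be recovered from the blank and transmission measurements, which transmission CT then reconstructs into an attenuation map. The following sketch uses assumed illustrative values.

```python
import numpy as np

I0 = 1.0e5                     # counts without the patient (blank scan)
mu_water = 0.15                # cm^-1, approximate for 99mTc photon energies
thickness = 20.0               # cm of water-equivalent tissue along the ray

I = I0 * np.exp(-mu_water * thickness)      # simulated transmission count
line_integral = -np.log(I / I0)             # recovered integral of mu dl
print(line_integral)                        # equals mu_water * thickness
```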
described above, but the same basic principles apply. The fan-beam geometry
has the advantage of having much higher sensitivity and, thus, of requiring lower
activity sources. However, the fan-beam geometry results in magnification and,
thus, the FOV is smaller than the size of the detector. It should be noted that in the
fan-beam geometry the fan lies in the transaxial plane and, thus, the truncation is
in the transaxial direction. For the scanning line source systems, an electronic
window is used so that transmission data are acquired only in the region directly
under the line source. To overcome the low sensitivity of parallel-beam geometry,
high activities (1.85–18.5 GBq) are used. This reduces the contamination of the
transmission data by emission activity. The scanning line source adds mechanical
complexity to the system and, as a result, several other designs have
been proposed.
In multiple line source systems, a small number of line sources are used.
The line sources are placed close enough together so that the object is covered
due to the finite acceptance angle of the camera collimator, in effect acting like a
non-uniform sheet source. One advantage of this geometry is that weaker sources
can be used for the sources near the edge of the camera, since these correspond
to thinner regions of the body. This can help reduce the effects of high count
rates. The final geometry proposed is the half-cone geometry shown above. In
this design, a high energy 133Ba point source is used. Many of the high energy
photons penetrate through the collimator, creating a half-cone-beam geometry.
This allows a parallel-hole collimator to be used both for transmission and
emission imaging. The use of a point source simplifies shielding of the source.
In general, radionuclide transmission sources have a number of
disadvantages. These include the fact that the source decays and must be replaced.
There are also limits on transmission count rates imposed by the gamma camera,
resulting in relatively noisy transmission images. In addition, if the count rate
due to the emission activity within the patient is high, the transmission images
can be degraded, resulting in inaccurate attenuation maps. Finally, with the
sheet source, scanning line source and multiple line source geometries, the
resolution of the transmission scan is limited by the combination of source and
camera collimators. In general, these provide lower resolution transmission
scans. However, one advantage of radionuclide based transmission systems is
the potential to perform simultaneous imaging, thus eliminating the need for
an additional transmission scan. Another advantage, especially when acquired
simultaneously, is that registration of the emission and transmission images is
guaranteed. Finally, the use of radionuclide sources with a small number of high
energy photopeaks makes converting the transmission images into an attenuation
map at the energy of the emission source easier than for X ray CT based systems,
which use X ray tubes having continuous X ray energy spectra.
water at the emission energy divided by that at the transmission source energy.
However, more accurate results can be obtained using the bilinear scaling method
described below for use with X ray CT images.
For hybrid SPECT/CT systems where a conventional X ray CT image
is transformed into an attenuation map, there are a number of additional
considerations. First, transmission data are obtained using a polychromatic
source. There can, thus, be substantial beam hardening. Fortunately, beam
hardening and other corrections routinely applied in X ray CT scanners to produce
images in Hounsfield units (HU) eliminate many of these concerns. In this case,
the CT image in Hounsfield units can be transformed to the attenuation map via
piecewise linear scaling, where, in effect, pixels with values less than 0 HU are
treated as water with densities ranging from 0 to 1, pixel values between 0 and
1000 HU are treated as a mixture of bone and water, and pixel values greater
than 1000 HU are treated as dense bone. Thus, for a pixel having a value h in
Hounsfield units, the attenuation map value is given by:
$$\mu(h) = \begin{cases} \mu_{\mathrm{water}}\,\dfrac{1000 + h}{1000} & \text{for } h \le 0 \\[4pt] \mu_{\mathrm{water}} + \dfrac{h}{h_{\mathrm{bone}}}\left(\mu_{\mathrm{bone}} - \mu_{\mathrm{water}}\right) & \text{for } 0 < h < h_{\mathrm{bone}} \\[4pt] \mu_{\mathrm{bone}}\,\dfrac{h}{h_{\mathrm{bone}}} & \text{for } h > h_{\mathrm{bone}} \end{cases} \qquad (11.10)$$

where μ_water and μ_bone are the attenuation coefficients of water and bone,
respectively, for the energy of the photopeak of the imaging radionuclide.
Once the attenuation map is obtained, attenuation correction can be
implemented using analytical, approximate or statistical image reconstruction
algorithms. Generally, analytical methods are not used due to their poor noise
properties. Approximate methods include the Chang algorithm. This method is
often used in regions of the body where the attenuation coefficient is assumed
to be uniform and the actual attenuation distribution is not measured, but instead
is approximated from the boundary of the object, which is usually assumed to
be an ellipse. In the Chang method, an image reconstructed using filtered back
projection (i.e. without attenuation compensation) is approximately compensated
for attenuation. This approximate compensation is obtained for each voxel by
dividing the uncorrected image signal by the average of the attenuation factors
that correspond to each projection view. For a uniform attenuator with an assumed
elliptical boundary, these attenuation factors can be calculated analytically.
However, the Chang method is approximate and has poor noise properties.
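The averaging step of the first-order Chang method can be illustrated for a uniform circular attenuator; the geometry, attenuation coefficient and voxel value below are assumptions for the sketch.

```python
import numpy as np

def chang_factor(x, y, radius=10.0, mu=0.15, n_views=64):
    """Average attenuation factor at point (x, y) inside a uniform disc of
    the given radius (cm) and attenuation coefficient mu (cm^-1)."""
    factors = []
    for phi in np.linspace(0, 2 * np.pi, n_views, endpoint=False):
        # distance from (x, y) to the disc boundary along direction phi
        dx, dy = np.cos(phi), np.sin(phi)
        b = x * dx + y * dy
        d = -b + np.sqrt(b * b - (x * x + y * y - radius ** 2))
        factors.append(np.exp(-mu * d))
    return np.mean(factors)

# first-order Chang correction of an uncorrected (FBP) voxel value:
uncorrected = 0.4
corrected = uncorrected / chang_factor(0.0, 0.0)   # voxel at the disc centre
print(corrected)
```

At the centre of the disc every path length equals the radius, so the correction factor reduces to exp(μ × radius); off-centre voxels receive a smaller correction.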
$$\hat{s} = \left(\frac{c_{\mathrm{lower}}}{w_{\mathrm{lower}}} + \frac{c_{\mathrm{upper}}}{w_{\mathrm{upper}}}\right)\frac{w_{\mathrm{peak}}}{2}$$

where c_lower and c_upper are the counts in the lower and upper scatter windows,
respectively; and w_peak, w_lower and w_upper are the widths of the photopeak,
lower scatter and upper scatter windows, respectively.
It should be noted that in Fig. 11.25 the scatter windows are adjacent to the
photopeak energy window, but this is neither necessary nor, in all cases,
desirable. In practice, it is desirable to position the scatter windows as close
as possible to the photopeak window while ensuring that only a small fraction of
the photopeak photons is detected in the scatter windows. The width of the
scatter windows is a compromise between obtaining as accurate a scatter estimate
as possible, which favours a narrow energy window, and obtaining an estimate
that is low in noise, which favours a wide energy window.
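The trapezoidal estimate itself is a one-line computation; the window widths and counts below are illustrative values, not from the text.

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple energy window (TEW) trapezoidal scatter estimate for the
    photopeak window."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# Illustrative 99mTc example: 20% photopeak window (28 keV wide) with
# 3 keV scatter windows; the upper-window counts are often taken as zero.
estimate = tew_scatter(c_lower=900.0, c_upper=0.0,
                       w_lower=3.0, w_upper=3.0, w_peak=28.0)
print(estimate)   # (900/3) * 28 / 2 = 4200 counts
```

The estimate is then subtracted from the photopeak counts, pixel by pixel or within the reconstruction model.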
FIG. 11.25. Illustration of the use of a trapezoidal approximation to estimate the scatter in the photopeak energy window in the triple energy window method of scatter compensation for 99mTc. It should be noted that in this example the windows are not necessarily optimally placed. In particular, the scatter windows are positioned such that there is a non-zero contribution from unscattered photons in the scatter energy windows. This is especially evident for the upper energy window. For the case of 99mTc, the counts in the upper window are often assumed to be zero.
the reconstruction using a projector and back projector algorithm. Thus, CDR
compensation is accomplished by modelling the CDR in the projector and back
projector. This is often implemented by rotating the image estimate, so that it is
parallel to the collimator face at each projection view. In this orientation, the CDR
is constant in planes parallel to the collimator face and can, thus, be modelled by
convolution of the CDR for the corresponding distance. In order to do this, the
distance from the plane to the face of the detector is needed. This is somewhat
complicated by the use of non-circular orbits and, in this case, manufacturers
need to store the orbit information (distance from the collimator face to the centre
of rotation for each projection view) with the projection image. In addition, a way
of estimating the CDR is needed. Analytical formulas exist for calculating the
geometric component of the CDR. Alternatively, a Gaussian function fit to a set
of point response functions measured in air can be used. Compensation
for the full CDR, including septal penetration and scatter, requires a CDR that
includes these effects. Analytical formulas do not exist, so either numerically
calculated (e.g. using Monte Carlo simulations of the collimator-detector system)
or measured CDRs are used. Various optimization and speed-up techniques have
been implemented to reduce the time required for CDR modelling.
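A minimal sketch of distance-dependent blurring in a rotation-based projector is shown below. The linear FWHM-versus-distance model is an assumption standing in for a fitted geometric CDR; all dimensions are illustrative.

```python
import numpy as np

def gaussian_kernel(fwhm, dx):
    """Normalized 1D Gaussian kernel for a given FWHM (cm) and pixel size dx."""
    sigma = fwhm / 2.355 / dx
    half = max(1, int(4 * sigma))
    x = np.arange(-half, half + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def project_with_cdr(image, dx=0.4, fwhm0=0.4, slope=0.05):
    """image[j, i]: j indexes distance from the collimator face.
    Each plane parallel to the collimator is blurred with a Gaussian whose
    FWHM grows linearly with distance, then summed into the projection."""
    proj = np.zeros(image.shape[1])
    for j, plane in enumerate(image):
        fwhm = fwhm0 + slope * j * dx          # assumed linear FWHM model
        proj += np.convolve(plane, gaussian_kernel(fwhm, dx), mode="same")
    return proj

img = np.zeros((50, 64)); img[40, 32] = 1.0    # point source deep in the image
p = project_with_cdr(img)
print(p.sum())   # total counts are preserved by the normalized kernels
```

In an iterative reconstruction, the matching back projector applies the same distance-dependent kernels in reverse order.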
It should be noted that CDR compensation does not fully recover the loss
of resolution of the collimator: the resolution remains limited and spatially
varying and partial volume effects are still significant for small objects. In
addition, CDR compensation produces correlated noise that can give a blobby
texture to the images (although the images do appear qualitatively less noisy)
and can introduce ringing artefacts at sharp edges. Despite these limitations,
CDR compensation has generally been shown to improve image quality for both
detection and quantitative tasks and has been used as a way to allow reduced
acquisition time.
11.3. PET SYSTEMS
11.3.1. Principle of annihilation coincidence detection
Radioactive decay via positron emission is at the heart of the PET image
formation process [11.3]. Positrons are emitted from the nucleus during the
radioactive decay of certain unstable, proton-rich isotopes. These isotopes
achieve stability by a decay process that converts a proton to a neutron and is
associated with the creation of a positron. A positron is the antimatter conjugate
of an electron and has the same mass as an electron but positive charge. As
with β– decay, positrons are emitted from the nucleus with different energies.
These energies have a continuous spectrum and a specific maximum value
that is characteristic of the parent isotope. Once emitted from the nucleus, the
positron propagates through the surrounding material and undergoes scattering
interactions, changing its direction and losing kinetic energy (Fig.11.26). Within
a short distance, the positron comes to rest and combines with an electron
from the surrounding matter. This distance is dependent on the energy of the
positron, which is itself a function of the parent isotope and is typically on the
order of a millimetre. The combination of a positron and an electron results in
the annihilation of both particles and the creation of two photons, each with an
energy of 511keV, equivalent to the rest masses of the two original particles.
Conservation of momentum, which is close to zero immediately before
annihilation, ensures both photons are emitted almost exactly 180 apart. These
characteristic photon emissions (known as annihilation radiation) always
511keV, always emitted simultaneously and almost exactly 180 apart form
the basis of PET and result in distinct advantages over single photon imaging in
terms of defining the LOR.
FIG.11.26. Positrons emitted from a radioactive nucleus propagate through the surrounding
material before eventually coming to rest a short distance from their site of emission. At this
point, the positron annihilates with an electron, creating two 511 keV photons that are emitted
approximately 180° apart. The perpendicular distance from the line defined by the two photons
to the site of positron emission places a limit on the spatial resolution that can be achieved
with PET systems.
FIG.11.27. The back to back photons that result from positronelectron annihilation can
potentially be measured by detectors placed around the source. Coincidence detection involves
the association of detection events occurring at two opposing detectors (A and B) based on
the arrival times of the two photons. A line of response joining the two detectors is assumed to
intersect the unknown location of the annihilation event. Coincidence detection obviates the
need for a collimator and is sometimes referred to as electronic collimation.
FIG. 11.28. (a) Photon penetration between adjacent detectors in a ring based system leads to mis-positioning of events. This primarily affects the radial component of spatial resolution, which degrades with distance from the centre of the field of view. (b) Residual momentum of the positron and electron immediately before annihilation causes the two 511 keV photons to deviate slightly from the expected 180° angle. As a result, a line joining detection events does not intersect the exact point of annihilation. The extent of this non-collinearity is greatly exaggerated in the figure, but it does contribute to a loss of spatial resolution, especially for large diameter PET systems.
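The relative importance of these effects can be sketched with a simple quadrature model. The 0.0022 × D rule of thumb for non-collinearity and the d/2 detector-width term are standard approximations, not taken from the text; the positron range depends on the isotope.

```python
import numpy as np

def pet_fwhm(crystal_width_mm, ring_diameter_mm, positron_range_mm):
    """Back-of-envelope PET resolution estimate: detector, non-collinearity
    and positron-range contributions combined in quadrature (assumed model)."""
    detector = crystal_width_mm / 2.0
    noncollinearity = 0.0022 * ring_diameter_mm   # grows with ring diameter
    return np.sqrt(detector**2 + noncollinearity**2 + positron_range_mm**2)

# whole body system (~900 mm ring) vs small animal system (~150 mm ring):
print(pet_fwhm(4.0, 900.0, 0.5))   # non-collinearity term is ~2.0 mm here
print(pet_fwhm(4.0, 150.0, 0.5))   # non-collinearity term is ~0.33 mm here
```

The comparison illustrates why non-collinearity matters far more for large diameter systems than for small animal scanners.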
FIG. 11.29. Images of the same phantom, each showing different statistical quality. The images shown in (a), (b), (c), (d), (e) and (f) were acquired for 1, 2, 3, 4, 5 and 20 min, respectively. Increasing the acquisition time increases the total number of true coincidence events and reduces statistical variability in the image.
quality of the measured coincidence data not only reduces image noise but also
allows the opportunity to reduce image smoothing and improve spatial resolution.
The need to optimize this trade-off between statistical noise and spatial resolution
influences both image reconstruction development and scanner design.
Noise in PET images is influenced by a number of factors, including the
sensitivity of the detector system, the amount of radioactive tracer administered
to the patient and the amount of time the patient can remain motionless for
an imaging procedure. Limitations on the latter two factors mean that high
sensitivity is an important objective for scanner design. Sensitivity is determined
by the geometry of the detector arrangement and the absorption efficiency of
the detectors themselves. Reducing the distance between opposing detectors
increases the solid angle of acceptance and increases sensitivity. However, the
requirement to accommodate all regions of the body imposes a minimum ring
diameter for whole body systems. For such systems, extending the axial field
of view (AFOV) of the detector system provides a mechanism for increasing
sensitivity. Cost constraints have prevented the construction of PET systems
that cover the entire length of the body. However, extended axial coverage can
be achieved using systems with much smaller AFOVs by scanning sections
of the body in a sequential fashion. For such systems, extending the AFOV of
the scanner not only increases sensitivity but also reduces the number of bed
translations required for whole body coverage. In addition to the geometry of the
detector system, sensitivity is also determined by the absorption efficiency of the
detectors. A high absorption efficiency for 511keV photons is desirable in order
to make best use of those photons that are incident upon the detectors. Absorption
efficiency or stopping power of the detector material is, therefore, an important
consideration for PET system design.
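The geometric part of this argument can be sketched for a point source at the centre of a cylindrical scanner. The closed-form fraction below follows from simple geometry under simplifying assumptions (no attenuation, perfect detector efficiency) and is not taken from the text.

```python
import numpy as np

def geometric_fraction(afov_cm, ring_diameter_cm):
    """Fraction of annihilation pairs from a centred point source that are
    intercepted by a cylindrical detector of the given axial length and
    diameter. Because the two photons are collinear, the pair is accepted
    whenever either photon strikes the detector band."""
    return afov_cm / np.hypot(afov_cm, ring_diameter_cm)

# doubling the AFOV of a whole body system roughly doubles sensitivity:
print(geometric_fraction(16.0, 90.0))   # ~0.175
print(geometric_fraction(32.0, 90.0))   # ~0.335
```

The near-linear gain with AFOV (for AFOV much smaller than the ring diameter) is one reason extending axial coverage is such an effective design lever.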
11.3.2.3. Quantitative accuracy
One of the strengths of PET is its capability to quantify physiological
processes in vivo. A prerequisite for this kind of quantitative analysis is that the
images accurately reflect the local activity concentration in the body. In order to
ensure this kind of quantitative accuracy, it is important to minimize effects that
corrupt the data and to correct residual corruption as necessary. Quantitative error
can arise from many sources but is primarily due to random coincidence events,
photon scatter within the body, photon attenuation within the body and detector
dead time. Figure 11.30 illustrates some of these situations.
Figure 11.30(b) illustrates a type of unwanted coincidence event that can
occur when photons from unrelated annihilation events are detected at
approximately the same time. Two photons detected within a short time interval
(coincidence timing window) will be associated with each other under the
FIG. 11.30. (a) A true coincidence event can occur when both photons escape the body without interacting. (b) A random coincidence event occurs when two photons from unrelated annihilation events are detected at approximately the same time. (c) A scattered coincidence event can occur when either photon is scattered within the body but is still detected. (d) No coincidence event is recorded when one or both photons are attenuated, typically due to scatter out of the field.
FIG.11.31. Example of bismuth germanate crystals used for PET (a). Bismuth germanate
samples photographed under room lighting (b) and in the presence of X ray irradiation
and dimmed room lighting (c). The scintillation light seen in (c)is due to the interaction of
radiation with the crystals, which causes electrons to become excited. When they return to
their ground state, energy is emitted, partly in the form of visible light.
Properties of scintillators used in PET:

                                          NaI     BGO     LSO
Linear attenuation coefficient
at 511 keV (cm^-1)                        0.34    0.95    0.87
Scintillation decay time (ns)             230     300     40
Relative light output                     100%    15%     75%
Energy resolution (%)                     6.6     10.2    10.0
Although NaI(Tl) is ideal for lower energy single photon imaging, its
relatively low linear attenuation coefficient for 511keV photons makes it less
attractive for PET applications. Sensitivity could potentially be increased by
increasing the thickness of the crystals, which are typically 1–3 cm thick.
However, the scope for substantially increasing crystal thickness is limited as it
results in a loss of spatial resolution. This is because thicker crystals are prone
to more significant depth of interaction problems as the apparent width of the
detector increases for sources located off-centre. Thin crystals composed of a
material with a high stopping power for 511keV photons are, thus, desirable
to ensure best possible sensitivity while maintaining spatial resolution. For this
reason, BGO and, more recently, LSO have replaced NaI(Tl) as the scintillator of
choice for PET.
BGO has the advantage of a high stopping power for 511keV photons and
has become an important scintillator for PET applications. However, it is not ideal
in many respects as it has relatively poor energy resolution and a long crystal decay
time. The poor energy resolution translates into a limited ability to reject scatter
via energy discrimination. In addition, the long decay time translates into a greater
dead time and increased number of random coincidences at high count rates. As
such, BGO is well suited for scanner designs that minimize scatter and count rate
via physical collimation, such as those with interplane septa (see Section 11.3.4.2).
Attempts to increase sensitivity by removing the interplane septa typically result
in a high scatter, high count rate environment for which BGO is not ideal.
Although LSO has a lower linear attenuation coefficient than BGO, its
shorter crystal decay time and slightly improved energy resolution convey
Space and cost constraints mean that individual scintillation crystals are
not usually coupled directly to individual photodetectors in a one to one fashion.
Instead, the most common arrangement is a block detector in which a group of
crystal elements share a smaller number of PMTs (Fig.11.32). The design of each
block varies between manufacturers and scanner models but usually involves a
matrix of crystal elements, a light guide and four PMTs. An example configuration
might be an 8 × 8 array of closely packed 4.4 mm × 4.0 mm × 30 mm crystal
elements, where the longest dimension is in the radial direction to maximize
detection efficiency. The light guide allows light to be shared between four
circular PMTs and the relative light distribution depends on the location of the
crystal in which the photon interacted. The (x, y) position of the detection event is
calculated from the outputs of the four PMTs using a weighted centroid algorithm,
similar to the Anger logic of a gamma camera. Although individual crystals can
be identified in this way, the response is not linear throughout the block due to
differences in the locations of the different crystal elements relative to the PMTs.
Experimentally determined look-up tables are used to relate the measured (x, y)
position to a corresponding detector element, effectively performing a form of
linearity correction. In this way, only four PMTs are needed to localize signals
from a much greater number of crystal elements. The number of crystal elements
divided by the number of PMTs in a PET system has been referred to as the
encoding ratio. A high encoding ratio implies lower production costs and is,
therefore, desirable.
FIG. 11.32. (a) A PET detector block consisting of scintillator material coupled to an array of photomultiplier tubes. The scintillator is cut into an array of individual crystal elements. Four photomultiplier tubes are used to read out the signal from the 8 × 8 array of crystal elements. (b) The x and y position of each photon is determined from the signal measured by each of the four photomultiplier tubes, labelled A–D, using the equations shown.
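The centroid calculation can be sketched as follows. Since the exact equations vary by manufacturer, the symmetric Anger-type ratios below are an assumption used for illustration.

```python
def block_position(a, b, c, d):
    """Weighted centroid position from four PMT signals, assuming the layout
    A = top-left, B = top-right, C = bottom-left, D = bottom-right.
    Returns (x, y) in the range [-1, 1] across the block."""
    total = a + b + c + d
    x = ((b + d) - (a + c)) / total   # right minus left
    y = ((a + b) - (c + d)) / total   # top minus bottom
    return x, y

# light shared mostly by the two right-hand PMTs -> event on the right side
print(block_position(a=10.0, b=40.0, c=10.0, d=40.0))   # (0.6, 0.0)
```

The computed (x, y) is then mapped through an experimentally determined look-up table to identify the specific crystal element, since the raw centroid response is not linear across the block.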
One of the advantages of the design described above is that each block
operates independently of its surrounding blocks. This leads to good count rate
performance as light is not transferred between blocks and the PMTs of one
block are unaffected by detection events in an adjacent block. An alternative
arrangement, referred to as quadrant sharing, increases the encoding ratio by
locating the PMTs at the corners of adjacent blocks. This arrangement differs
from the conventional block design in that each PMT can now be exposed to
light from up to four different blocks. This can result in better spatial resolution
and a higher encoding ratio but is also susceptible to greater dead time problems
at high count rates.
Another alternative to the block design adopts an approach similar to
that used in conventional gamma cameras. These Anger-logic designs involve
detector modules that have a much larger surface area compared to conventional
block detectors, e.g. 92 mm × 176 mm. Each module comprises many small
crystal elements which are coupled, via a light guide, to an array of multiple
PMTs. Light is spread over a larger area than in the block design and positional
information is obtained using Anger-logic in the same way as a gamma camera.
The PMTs used in this design are typically larger than those used in block
detectors, increasing the encoding ratio. The larger area detector modules
encourage more uniform light collection compared to block designs, which leads
to more uniform energy resolution. However, a disadvantage of this design is that
the broad light spread among many PMTs can lead to dead time problems at high
count rates.
11.3.3.3. Scanner configurations
The detectors described above form the building blocks used to construct
complete scanner systems. Various scanner configurations have been developed,
although the dominant design consists of a ring of detectors that completely
surrounds the patient (or research subject) in one plane (Fig.11.33(a)). As with
other scanner systems, this plane is referred to as the transverse or transaxial
plane and the direction perpendicular to this plane is referred to as the axial or
z direction. Several rings of detectors are arranged in a cylindrical geometry,
allowing multiple transverse slices to be simultaneously acquired. As coincidence
detection requires two opposing detectors, a full ring system of this sort allows
coincidence data to be acquired at all angles over 180°. Although complete
angular coverage is achieved in the transverse plane, there is much more limited
coverage in the axial direction. Cost constraints and, to some extent, limited
patient tolerance of extended tunnels mean that the detector rings usually extend
for only a few centimetres in the axial direction. Human whole body systems
typically have an AFOV of around 15–20 cm, although the trend in scanner
design has been to increase the AFOV, thus increasing both sensitivity and the
number of transverse slices that can be simultaneously acquired.
FIG.11.33. (a) Full ring PET system shown in the transverse plane, indicating how each
detector can form coincidence events with a specific number of detectors on the opposite side
of the ring. For clarity, this fan-like arrangement of lines of response is shown for only eight
detectors. The dashed line indicates how the imaging field of view is necessarily smaller than
the detector ring diameter. (b) PET system shown in side elevation, indicating the limited
detector coverage in the z direction. The shaded area indicates the coincidence field of view.
The dashed lines indicate the singles field of view. End shields reduce, but do not eliminate,
detection of single photons from outside of the coincidence field of view when operating in
3D mode.
The diameter of the detector ring varies considerably between designs and
this dimension reflects the intended research or clinical application. Small animal
systems may have ring diameters of around 15 cm, brain oriented human systems
around 47 cm and whole body human systems around 90 cm. Systems with ring
diameters that can accommodate the whole body are clearly more flexible in terms
of the range of studies that can be performed. However, smaller ring diameters
have advantages in terms of increased sensitivity, owing to a greater solid angle
of acceptance, and potentially better spatial resolution, owing to reduced photon
non-collinearity effects. It should be noted that the spatial resolution advantage is
complicated by greater depth of interaction problems as the detector ring diameter
decreases and shorter crystals or depth of interaction measurement capability
may be required. Furthermore, the effective imaging FOV is always smaller than
the detector ring diameter because the acquisition of coincidence events between
all possible detector pairs (such as those between nearby detectors in the ring)
is not supported. In addition, PET systems have annular shields at the two ends
of the detector ring that reduce the size of the patient port. These end shields
are intended to decrease the contribution of single photons from outside the
coincidence FOV (Fig.11.33(b)). The coincidence FOV refers to the volume that
the detector system surrounds, within which coincidence detection is possible.
Single photons originating from outside the coincidence FOV cannot give rise to
true coincidence events but may be recorded as randoms and can also contribute
to detector dead time. Reducing the size of these end shields allows the patient
port size to be increased but also leads to greater single photon contamination.
Unlike rotating camera SPECT systems, where different projections are
acquired in a sequential fashion, full ring PET systems simultaneously acquire
all projections required for tomographic image formation. This has an obvious
advantage in terms of sensitivity, and it also enables short acquisition times,
which can be important for dynamic studies. Full ring systems are, however,
associated with high production costs and, for this reason, some early PET
designs employed a partial ring approach. In these designs, two large area
detectors were mounted on opposite sides of the patient and complete angular
sampling was achieved by rotating the detectors around the z axis. Gaps in the
detector ring led to reduced sensitivity and the partial ring design is now usually
reserved for prototype systems. Another related approach to PET system design
was to use dual head gamma cameras modified to operate in coincidence mode.
The use of modified gamma cameras allowed for lower cost systems capable of
both PET and SPECT. However, the poor performance of NaI based PET means
that this approach has now been discontinued.
Current clinical systems have an AFOV that is adequate to cover most
individual organs but in order to achieve coverage of the whole body, patient
translation is required. Given the clinical importance of whole body oncology
studies, the mechanism for translating the patient through the scanner has
become an important component of modern PET systems. The patient bed or
patient handling system has to be made of a low attenuation material but must
still be able to support potentially very heavy patients. It must be capable of a
long travel range, so as to allow whole body studies in a single pass without the
need for patient repositioning. Precise motion control is also critical, particularly
for PET/CT systems where accurate alignment of the two separately acquired
modalities is essential. Advanced patient handling systems have been specifically
developed for PET/CT to ensure that any deflection of the bed is identical for
both the CT and PET acquisitions, thus ensuring accurate alignment irrespective
of patient weight. Although most patient beds have a curved shape for improved
patient comfort and better mechanical support, many manufacturers can also
provide a flat pallet that is more compatible with radiation treatment positioning.
FIG.11.34. Coincidence circuits allow two photon detection events to be associated with each
other based upon their arrival times. Photons detected at A and B produce signals that are
amplified and analysed to determine whether they meet the energy acceptance criteria. Those
signals that fall within the energy acceptance window produce a logic pulse (width τ) that is
passed to the coincidence processor. A coincidence event is indicated if both logic pulses fall
within a specified interval (2τ).
Coincidence detection assumes that only two photons were detected, but
with multiple disintegrations occurring concurrently, it is possible for three or
more photons to be recorded by separate detectors within the coincidence time
window. When this occurs, it is unclear which pair of detectors corresponds to a
legitimate coincidence and multiple events of this sort are often discarded. This
circumstance is most likely to occur when there is a large amount of activity in
or around the FOV, and it contributes to count loss at high count rates. Another
possible scenario is that only one photon is detected within the coincidence time
window and no coincidence event will be recorded. These single photon detection
events are a result of a number of reasons: the angle of photon emission was such
that only one of the two annihilation photons was incident upon the detectors;
one of the two annihilation photons was scattered out of the FOV; one of the two
annihilation photons was absorbed within the body; and photons originating from
outside the coincidence FOV. Although these single photons cannot form true
coincidences, they are a major source of randoms.
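The grouping logic described above can be sketched as a toy coincidence sorter (the event format, window width and detector numbering below are all invented for illustration, not any vendor's implementation): time-stamped singles are grouped within the coincidence window, lone singles are counted but produce no coincidence, and ambiguous multiples are discarded.

```python
# Toy coincidence sorter: pair time-stamped singles within a window of 2*tau,
# keeping two-photon groups, counting lone singles and discarding multiples.
def sort_coincidences(events, two_tau):
    """events: time-ordered list of (time_ns, detector_id) single detections."""
    pairs, singles, multiples = [], 0, 0
    i, n = 0, len(events)
    while i < n:
        # Collect every event falling within the window opened by event i.
        j = i + 1
        while j < n and events[j][0] - events[i][0] <= two_tau:
            j += 1
        group = events[i:j]
        if len(group) == 2:
            pairs.append((group[0][1], group[1][1]))  # valid coincidence
        elif len(group) == 1:
            singles += 1      # lone single: no coincidence recorded
        else:
            multiples += 1    # three or more photons: ambiguous, discarded
        i = j

    return pairs, singles, multiples

# Hypothetical stream: one clean pair, one lone single, one triple event.
events = [(0.0, 3), (2.0, 17), (50.0, 8), (120.0, 1), (121.0, 9), (122.5, 30)]
pairs, singles, multiples = sort_coincidences(events, two_tau=4.0)
```

With these invented timestamps, the sorter returns one coincidence pair, one single and one discarded multiple, mirroring the three scenarios described in the text.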
The mechanism described above records what are known as prompt
coincidence events which consist of true, random and scattered coincidences.
The relative proportion of each component depends on factors such as the count
rate, the size of the attenuation distribution (patient size) and the acquisition
geometry (2D or 3D). Only the trues component contributes useful information
and the randoms and scattered coincidences need to beminimized. The randoms
component is maintained as low as possible by setting the coincidence time
window to the shortest duration consistent with the time resolution of the system.
The scatter component is maintained as low as possible by energy discrimination.
In addition to providing positional and timing information, the pulse produced
by the detectors can be integrated over time to provide a measure of the energy
deposited in the detector. In a block detector with four PMTs, the sum of the
signals from each PMT is proportional to the total amount of scintillation light
produced and, thus, the total energy deposited in the detector material. Under
the assumption that the photon was completely absorbed in the detector, this
signal provides a measure of the photon's energy and can be used to reject lower
energy photons that have undergone Compton scattering within the patient. In
practice, the energy resolution of most PET detector systems is such that the
energy acceptance window must be set quite wide to avoid rejecting too many
unscattered 511 keV photons. For BGO based systems, an energy acceptance
range of 350–650 keV is typical. As small-angle scatter can result in only a small
loss of energy, many of these scattered photons will be accepted within the energy
window, despite the fact that they do not contribute useful information. Energy
discrimination, therefore, reduces, but does not eliminate, scattered coincidence
events and additional compensation is required.
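The limits of energy discrimination follow directly from Compton kinematics. The sketch below uses the standard formula for the scattered photon energy (for a 511 keV photon this simplifies to E′ = 511/(2 − cos θ) keV, since the incident energy equals the electron rest energy) together with the 350 keV lower threshold quoted above for BGO:

```python
import math

# Compton scattered photon energy for an incident annihilation photon.
# Because E0 = me*c^2 = 511 keV, this reduces to 511 / (2 - cos(theta)).
def scattered_energy_keV(theta_deg, e0_keV=511.0, mec2_keV=511.0):
    theta = math.radians(theta_deg)
    return e0_keV / (1.0 + (e0_keV / mec2_keV) * (1.0 - math.cos(theta)))

# Largest scattering angle still accepted by a 350 keV lower level
# discriminator (the typical BGO window quoted in the text).
lld_keV = 350.0
theta_max = next(t for t in range(0, 181) if scattered_energy_keV(t) < lld_keV) - 1
```

With a 350 keV threshold, photons scattered by up to roughly 57° are still accepted, which is why energy discrimination reduces but cannot eliminate scattered coincidences.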
11.3.4.2. Data acquisition geometries
Scanners consisting of multiple detector rings provide extended axial
coverage and are advantageous for rapid acquisition of volumetric data.
However, the presence of multiple detector rings raises issues concerning the
optimum combinations of detectors that should be used to measure coincidence
events. In a system with only one ring of detectors, the acquisition geometry is
simple as each detector measures coincidence events with other detectors on
the opposite side of the same ring. When additional detector rings are added to
the system, it is possible to allow coincidence events to be recorded between
detectors in different rings. This means including photons that were emitted in
directions that are oblique to the transverse plane. Given that photons are emitted
in all directions, increasing the maximum ring difference increases the angle of
obliqueness that is accepted and, therefore, increases system sensitivity. The data
acquisition geometry refers to the arrangement of detector pairs that are permitted
to form coincidence events and, in practice, involves the presence or absence
of interplane septa (Fig.11.35). Data acquisition with septa in place is referred
to as 2D mode, while acquisition without septa is referred to as 3D mode.
FIG.11.35. (a)2D and (b)3D acquisition geometries. In 2D mode, a series of annular septa
are inserted in front of the detectors so as to absorb photons incident at oblique angles. In 3D
mode, these septa are removed, allowing oblique photons to reach the detectors. 3D mode is
associated with high sensitivity but also increased scatter and randoms fractions, the latter
partly due to single photons from outside the coincidence field of view.
In 3D acquisition mode, the septa are entirely removed from the FOV and
there is no longer any physical collimation restricting the photons that are incident
upon the detectors. Coincidence events can be recorded between detectors in
different rings and potentially between all possible ring combinations. Photons
emitted at oblique angles with respect to the transverse plane are no longer
prevented from reaching the detectors, and system sensitivity is substantially
increased compared to 2D acquisition. Sensitivity gains by a factor of around
five are typical, although the exact value depends on the scanner configuration
and the source distribution. In 2D mode, sensitivity varies slightly between
adjacent slices but does not change greatly over the AFOV. In 3D mode, the
sensitivity variation in the axial direction is much greater and has a triangular
profile with a peak at the central slice. The triangular axial sensitivity profile
can be understood by considering a point source centrally located in the first
slice at one of the extreme ends of the scanner. True coincidence events can only
be recorded between detectors in the first ring. As the source is moved towards
the central slice, coincidence events can be recorded between an increasing
number of detector ring combinations, leading to an increase in sensitivity. As a
consequence of the substantial sensitivity increase, 3D acquisition is associated
with higher detector count rates, leading to more randoms and greater dead time
than corresponding acquisitions in 2D mode. Furthermore, 3D mode cannot
take advantage of the scatter rejection afforded by interplane septa and, as a
result, records a greatly increased proportion of scattered coincidence events.
The advantage of 3D acquisition is its large increase in sensitivity
compared to 2D acquisition. This would be expected to result in images with
improved statistical quality or, alternatively, comparable image quality with
shorter scan times or reduced administered activity. In practice, evaluating
the relative advantage of 3D acquisition is complex as it is associated with
substantial increases in both the randoms and scatter components. Both of these
unwanted effects can be corrected using software techniques, but these corrections
can themselves be noisy and potentially inaccurate. Furthermore, the relative
contribution of randoms and scattered photons is patient specific. The randoms
and scatter fractions are defined as the randoms or scatter count rate divided by
the trues rate, and both increase with increasing patient size. Both randoms and
scatter fractions are substantially higher in 3D compared to 2D mode. In 3D
mode, scatter fractions over 50% are common, whereas 15% is more typical for
2D mode. Randoms fractions are more variable as they depend on the study, but
randoms often exceed trues in 3D mode.
A figure of merit that is sometimes useful when considering the
performance of scanner systems is the noise equivalent count rate (NECR). The
NECR is equivalent to the coincidence count rate that would have the same noise
properties as the measured trues rate after correcting for randoms and scatter.
NECR = T²/(T + S + 2fR) (11.12)
where
T, S and R are the true, scatter and random coincidence count rates, respectively;
and f is the fraction of the sinogram width that intersects the phantom.
For a given phantom, the NECR is a function of the activity in the FOV
and is usually determined over a wide activity range as a radioactive phantom
decays (Fig.11.36). The reason for this count rate dependence is twofold: the
randoms rate increases as the square of the single photon count rate (which is
approximately proportional to the activity in the FOV) and the sensitivity of the
scanner for trues decreases with increasing count rates as detector dead time
becomes more significant.
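These competing effects can be made concrete with a toy count rate model. Only the NECR formula itself comes from Eq. (11.12); the sensitivity, dead time and randoms constants below are invented purely to illustrate the characteristic rise and fall of the curve:

```python
import math

def necr(T, S, R, f=1.0):
    """Noise equivalent count rate, NECR = T^2 / (T + S + 2 f R)."""
    return T * T / (T + S + 2.0 * f * R)

def rates(activity_MBq, sens=1000.0, tau=5e-6, k_r=2.0, sf=0.3):
    """Illustrative count rate model (all constants assumed, not measured)."""
    ideal = sens * activity_MBq               # trues without any losses
    T = ideal * math.exp(-ideal * tau)        # paralysable dead time model
    R = k_r * activity_MBq ** 2               # randoms grow as activity squared
    S = sf * T                                # scatter as a fixed fraction of trues
    return T, S, R

# NECR versus activity: rises at low activity, peaks, then falls as dead
# time and randoms dominate, as described for Fig. 11.36.
curve = [(a, necr(*rates(a), f=0.5)) for a in range(1, 401)]
peak_activity, peak_necr = max(curve, key=lambda p: p[1])
```

The peak of this curve is often used to choose the injected activity at which a scanner delivers its best statistical performance.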
An important factor when considering the relative performance of 2D and
3D acquisition modes is the characteristics of the detector material. In 2D mode,
the septa substantially reduce dead time, randoms and scatter, making the poor
timing and energy resolution of BGO less of a limitation. BGO based systems
are, thus, well suited to 2D acquisition mode. However, for BGO, the sensitivity
advantage of 3D acquisition mode is substantially offset by the high randoms
and scatter fractions that are encountered. For systems based on detectors such as
LSO, the improved timing resolution can be used to reduce the coincidence time
window and, thus, reduce the randoms fraction. The improved energy resolution
also allows the lower level energy discriminator to be raised, resulting in a lower
scatter fraction. LSO or similar fast detector materials are, thus, well suited to
3D acquisition mode. The introduction of these detectors, along with improved
reconstruction algorithms for 3D data, means that 3D acquisition mode now
dominates. Many scanner systems no longer support 2D mode as this allows the
septa to be completely removed from the design, reducing cost and potentially
increasing the patient port diameter.
FIG.11.36. The relative proportion of true, random and scattered coincidence events as
a function of activity in the field of view. At low activities, the true coincidence count rate
increases linearly with activity. However, at higher activities, detector dead time becomes
increasingly significant. The trues rate increases less rapidly with increasing activity and
can even decrease at very high activities. The randoms count rate increases with increasing
activity as a greater number of photons are detected. The scatter count rate is assumed to be
proportional to the trues rate. Scanner count rate performance can be characterized using
the noise equivalent count rate (NECR), which is a function of the true, random and scatter
coincidence count rates.
Coincidence data from a full ring system are conventionally stored in a sinogram: a two
dimensional histogram indexed along the y axis by angle and the x axis by distance.
angular sampling is usually evenly spaced over 180° but the sampling along each
row is slightly non-linear. The separation of adjacent elements in the projection
decreases towards the edges of the FOV owing to the ring geometry. Correction
for this effect, known as arc correction, is required and is usually implemented
during image reconstruction. Adjacent elements within a particular sinogram row
would be expected to be associated with two parallel LORs joining detector pairs
that are next to each other in the ring. In practice, improved sampling is achieved
by also considering LORs that are offset by one detector. Despite the fact that
these LORs are not exactly parallel to the others, these data are inserted into
the sinogram rows as if they came from virtual detectors positioned in the gaps
between the real detectors.
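The non-linear radial sampling follows directly from the ring geometry. The sketch below (assuming an idealized circular ring; the detector count and ring radius are arbitrary) computes the radial offset s of each LOR in one projection row and compares the element spacing at the centre and at the edge of the FOV:

```python
import math

def lor_offset(i, j, n_det=64, radius_mm=400.0):
    """Radial offset s of the LOR joining detectors i and j on a circular ring.

    The offset is the chord distance from the ring centre, which depends only
    on the angular separation of the two detectors.
    """
    delta = 2.0 * math.pi * (j - i) / n_det
    return radius_mm * math.cos(delta / 2.0)

# One projection row: the pairs (i, 32 - i) all share the same mean angle.
row = [lor_offset(i, 32 - i) for i in range(16)]
centre_gap = abs(row[1] - row[0])    # spacing near the centre of the FOV
edge_gap = abs(row[15] - row[14])    # spacing near the edge of the FOV
```

For these assumed dimensions the spacing shrinks from about 39 mm between the two central elements to under 6 mm at the edge; arc correction resamples each row onto a uniform grid to remove exactly this non-linearity.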
(a)
(b)
s
s
FIG.11.37. Full ring PET scanners simultaneously measure multiple projections at different
angles with respect to the patient. An example showing the orientation of two parallel
projections is shown in (a). Projection data of this sort are typically stored in sinograms;
an example is shown in (b). In a sinogram, each row represents a projection at a different
angle φ. Each projection is made up of discrete elements that are indexed by s and contain the
number of coincidence counts recorded along individual lines of response. The two example
projections shown in (a) are also highlighted in sinogram (b).
Detector rings
Septa
Lines of
response
FIG.11.38. Side elevation of an eight ring PET scanner in 2D ((a) and (b)) and
3D (c) acquisition modes. (a) Lines of response joining opposing detectors in the same ring,
forming direct planes. (b) Lines of response between detectors in adjacent rings. These lines
of response are averaged to form cross planes (dotted line) that are assumed to be located at
the mid-point between adjacent detectors. Both direct and cross planes are simultaneously
acquired during 2D acquisition. (c) 3D acquisition in which each ring is permitted to form
coincidence events with all other rings.
When the septa are removed, as is the case in 3D acquisition mode, there is
no longer any physical restriction on the detector rings that can be used to measure
coincidence events. An N ring scanner could have a maximum ring difference of
N − 1, resulting in up to N² possible sinograms. In 2D mode, such a system
would have a total of 2N − 1 sinograms, so it can be seen that the total volume
of data is substantially higher in 3D mode. In order to reduce this volume for
ease of manipulation, the maximum ring difference can be reduced. This has the
effect of introducing a plateau on the axial sensitivity profile, converting it from a
triangular to a trapezoidal form. Additionally, several possible ring combinations
can be combined in a similar fashion to that indicated for 2D acquisition mode.
It should be noted that 3D acquisition mode results in data that are redundant
in the sense that only a subset of the sinograms (those in the transverse planes)
are required for tomographic image reconstruction. The purpose of acquiring the
additional oblique data is to increase sensitivity and reduce statistical noise in the
resulting images.
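The sinogram counts quoted above are easy to verify by direct enumeration (the 32 ring example scanner is arbitrary):

```python
# Count sinograms for an N ring scanner: 3D mode with the full ring
# difference gives N^2, while 2D mode gives 2N - 1 (direct + cross planes).
def n_sinograms_3d(n_rings, max_ring_difference=None):
    mrd = n_rings - 1 if max_ring_difference is None else max_ring_difference
    return sum(1 for i in range(n_rings) for j in range(n_rings)
               if abs(i - j) <= mrd)

def n_sinograms_2d(n_rings):
    return 2 * n_rings - 1   # N direct planes plus N - 1 cross planes

print(n_sinograms_3d(32))      # 1024 (= N^2) with the full ring difference
print(n_sinograms_2d(32))      # 63 (= 2N - 1)
print(n_sinograms_3d(32, 7))   # 424: a reduced ring difference shrinks the data
```

The last line shows how limiting the maximum ring difference trades some oblique sensitivity for a substantially smaller data volume.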
In addition to the sinogram representation described above, some scanners
also support list-mode acquisition. In this mode, coincidence events are not
arranged into sinograms in real time but are recorded as a series of individual
events. This stream of coincidence events is interspersed with time signals and
potentially other signals from ECG or respiratory gating devices. These data can
be used as the input to list-mode image reconstruction algorithms but may also be
sorted into sinograms prior to image reconstruction. The advantage of acquiring
in list-mode is that the sorting of the data into sinograms can be performed
retrospectively. This provides a degree of flexibility that is very helpful when data
are acquired in conjunction with physiological gating devices or when sequential
images over time are of interest. For example, separate ECG gated and dynamic
time series images can be obtained from the same cardiac list-mode acquisition.
Furthermore, certain parameters can be retrospectively adjusted and do not have
to match the parameters chosen at the time of acquisition.
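The flexibility of retrospective sorting can be sketched as follows. The event and tag format below is hypothetical (real list-mode formats are vendor specific); coincidence events are binned into cardiac gates using ECG trigger tags embedded in the stream:

```python
# Retrospectively re-bin a (hypothetical) list-mode stream into cardiac gates.
# Each stream entry is (kind, time_ms, payload): "ecg" entries are R wave
# trigger tags, "event" entries are coincidence events.
def rebin_gated(stream, n_gates):
    frames = [[] for _ in range(n_gates)]
    last_trigger = None
    rr = None                      # most recent R-R interval estimate
    for kind, t, payload in stream:
        if kind == "ecg":
            if last_trigger is not None:
                rr = t - last_trigger
            last_trigger = t
        elif kind == "event" and last_trigger is not None and rr:
            # Cardiac phase = fraction of the R-R interval elapsed.
            gate = min(int((t - last_trigger) / rr * n_gates), n_gates - 1)
            frames[gate].append(payload)
    return frames

stream = [
    ("ecg", 0, None), ("event", 500, "A"),   # "A" precedes a known R-R interval
    ("ecg", 1000, None), ("event", 1100, "B"),
    ("event", 1400, "C"), ("event", 1900, "D"),
    ("ecg", 2000, None), ("event", 2600, "E"),
]
frames = rebin_gated(stream, n_gates=4)
```

Because the raw events are preserved, the same stream could equally be re-binned into a different number of gates, or into an ungated dynamic time series, without re-acquiring any data.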
11.3.4.4. Time of flight
Detectors operating in coincidence mode provide spatial information related
to individual positron–electron annihilations but this information is not sufficient
to determine the exact location of each event. A line joining the two detectors
can be assumed to intersect the site of the annihilation but the exact position
along this line cannot be determined. For this reason, PET systems measure
signals from multiple events and the resulting projections are used to reconstruct
images using computed tomography. However, it has long been appreciated that
the difference in the detection times of the two annihilation photons provides
a mechanism for precisely localizing the site of individual positron–electron
annihilations, an approach known as time of flight (TOF) PET. Early TOF systems
were based on very fast scintillators, such as CsF and BaF2, that have relatively
poor stopping power for 511 keV
photons. The resulting low sensitivity of these devices could not be offset by the
improved signal to noise ratio provided by the TOF information and interest in
the method declined. Interest was subsequently rekindled with the introduction
of LSO based systems, which have been able to combine timing resolutions
of around 600 ps with high sensitivity. A timing resolution of 600 ps translates
to a spatial uncertainty of 9 cm which, although clearly worse than the spatial
resolution that can be achieved with conventional PET, does represent useful
additional information.
In addition to the high performance required for conventional PET, TOF
PET requires scanners optimized for high timing resolution. The additional TOF
information has data management considerations because an extra dimension has
been added to the dataset. TOF data may be acquired in list-mode and fed directly
to a list-mode reconstruction algorithm that is optimized for TOF. Alternatively,
the data may be reorganized into sinograms where the sinograms have an
additional dimension reflecting a discrete number of time bins. Each coincidence
event is assigned to a particular sinogram depending on the difference in
the arrival times of the two photons. TOF sinograms also require dedicated
reconstruction algorithms that incorporate the TOF information into the image
reconstruction. An interesting feature of TOF PET is that the signal to noise ratio
gain provided by the TOF information is greater for larger diameter distributions
of radioactivity. This is related to the fact that the spatial uncertainty Δx becomes
relatively less significant as the diameter increases. This has potential benefits for
body imaging, particularly in large patients where high attenuation and scatter
mean that image quality is usually poorest.
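The numbers quoted above follow from two standard relations: the localization uncertainty is Δx = cΔt/2, and the TOF signal to noise gain for an object of diameter D is commonly approximated as √(D/Δx). The object diameters below are arbitrary illustrative values:

```python
C_MM_PER_PS = 0.2998              # speed of light in mm per picosecond

def tof_uncertainty_mm(timing_resolution_ps):
    """Spatial localization uncertainty: dx = c * dt / 2."""
    return C_MM_PER_PS * timing_resolution_ps / 2.0

def tof_snr_gain(object_diameter_mm, timing_resolution_ps):
    """Commonly quoted approximation: SNR gain ~ sqrt(D / dx)."""
    return (object_diameter_mm / tof_uncertainty_mm(timing_resolution_ps)) ** 0.5

dx = tof_uncertainty_mm(600.0)           # ~90 mm, the 9 cm quoted in the text
gain_large = tof_snr_gain(400.0, 600.0)  # larger patient: larger TOF benefit
gain_small = tof_snr_gain(200.0, 600.0)
```

Doubling the object diameter increases the approximate gain by √2, which is why the TOF benefit is greatest for large patients.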
11.3.5. Data corrections
11.3.5.1. Normalization
Normalization refers to a software correction that is applied to the measured
projection data in order to compensate for variations in the sensitivity of
different LORs. Without such a correction, images display systematic variations
in uniformity and pronounced artefacts that include spike and ring artefacts at
the centre of the FOV (Fig.11.40). It is somewhat analogous to the uniformity
correction applied to gamma camera images. Sources of sensitivity variations
include:
Detector efficiency variations: The detection efficiency of a particular
LOR depends on the efficiencies of the individual detectors involved.
Individual detectors can have variable efficiency due to differences in PMT
gain and in the response of individual crystal elements.
FIG.11.41. Diagram illustrating the concept of how a delayed coincidence circuit can be
used to estimate the number of random events in the prompt circuit. (a)Detection events from
two opposing detectors, indicating three coincidence events in the prompt circuit. (b) Data
from detector 2 delayed with respect to detector 1 and indicating one coincidence event in this
delayed circuit. The temporal delay prevents true coincidence events from being recorded in
the delayed circuit, but random coincidence events still occur with the same frequency as in
the prompt circuit. If data are acquired with sufficient statistical quality, the total number of
delayed coincidence events provides an estimate of the total number of randoms in the prompt
circuit.
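The delayed window idea can be tested with a toy Monte Carlo (all rates and window widths below are invented): two independent Poisson singles streams cannot produce true coincidences, so the prompt and delayed windows should count statistically equal numbers of purely random pairs.

```python
import bisect
import random

random.seed(1)

def poisson_times(rate_per_s, duration_s):
    """Event times of a Poisson process with the given rate."""
    t, out = 0.0, []
    while True:
        t += random.expovariate(rate_per_s)
        if t >= duration_s:
            return out
        out.append(t)

def count_pairs(t1, t2, offset_s, window_s):
    """Count t2 events within `window_s` of each t1 event, shifted by `offset_s`."""
    n = 0
    for t in t1:
        lo = bisect.bisect_left(t2, t + offset_s)
        hi = bisect.bisect_right(t2, t + offset_s + window_s)
        n += hi - lo
    return n

d1 = poisson_times(1e5, 1.0)                 # uncorrelated singles, detector 1
d2 = poisson_times(1e5, 1.0)                 # uncorrelated singles, detector 2
prompt = count_pairs(d1, d2, 0.0, 10e-9)     # prompt window (all randoms here)
delayed = count_pairs(d1, d2, 1e-6, 10e-9)   # same window, delayed by 1 us
```

With the assumed rates, both windows count about S1 × S2 × w ≈ 100 random pairs, so subtracting the delayed count from the prompts estimates the randoms without needing any model of the activity distribution.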
For a photon emitted at depth x within a uniformly attenuating object, the
probability p1 that it escapes the object without interaction is:

p1 = I(x)/I(0) = e^(−μx) (11.13)

where

I(x) is the beam intensity after passing through attenuating material of thickness
x;

I(0) is the intensity in the absence of attenuation;

and μ is the linear attenuation coefficient.
The probability that the corresponding photon 2 will also escape the object
is given by p2:
p2 = I(D − x)/I(0) = e^(−μ(D − x)) (11.14)
The probability of both photons escaping the body such that a coincidence
event can occur is given by the product of p1 and p2:
p1 p2 = e^(−μx) e^(−μ(D − x)) = e^(−μD) (11.15)
FIG.11.42. When considering both back to back photons along a particular line of response,
the attenuation experienced by a point source within the body (a) is independent of its location
along the line and is given by e^(−μD) for a uniformly attenuating object. In (b), the positron
emitting transmission source is outside the patient but experiences the same attenuation as the
internal source when considering coincidence events along the same line of response.
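A quick numerical check of Eqs (11.13)–(11.15), using an approximate value of μ for water at 511 keV and an arbitrary object size, confirms that the combined escape probability does not depend on the source depth:

```python
import math

# Approximate linear attenuation coefficient of water at 511 keV, in mm^-1
# (about 0.096 cm^-1).
MU_511 = 0.0096

def both_escape(x_mm, d_mm, mu=MU_511):
    """Probability that BOTH annihilation photons escape a uniform object."""
    p1 = math.exp(-mu * x_mm)            # photon 1 traverses thickness x
    p2 = math.exp(-mu * (d_mm - x_mm))   # photon 2 traverses thickness D - x
    return p1 * p2                        # product = exp(-mu * D), independent of x

# Source placed at several depths within a 300 mm thick object: the
# combined probability is identical at every depth.
probs = [both_escape(x, 300.0) for x in (0.0, 75.0, 150.0, 300.0)]
```

This position independence is precisely what allows an external transmission source to measure the attenuation factor that applies to an internal emission source along the same LOR.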
From this equation, it can be seen that the attenuation factor can be obtained
by dividing Nx by N0. The attenuation correction factor is simply the reciprocal
of the attenuation factor and is given by N0/Nx. N0 can be obtained from what
is known as a blank scan, a reference acquisition performed without the patient
or any other attenuating material in the FOV.
Transmission measurements have also been performed with single photon
sources such as 137Cs. The fact that the single photon emissions were at 662 keV
as opposed to 511 keV, and that the transmission data had a large scatter
component, was problematic, but these effects could be effectively suppressed
using software segmentation algorithms.
With the introduction of PET/CT, the need for radionuclide transmission
systems was eliminated as, with careful manipulation, the CT images can be
used not just for anatomic localization but also for attenuation correction. CT
based attenuation correction has a number of advantages, including the fact that
the resulting attenuation correction factors have very low noise due to the high
statistical quality of CT; rapid data acquisition, especially with high performance
multi-detector CT systems; insensitivity to radioactivity within the body; and no
requirement for periodic replacement of sources as is the case with 68Ge based
transmission systems. CT based attenuation correction is significantly different
from radionuclide based transmission methods because the data are acquired
on a separate, albeit well integrated, scanner system using X ray photons with
energies that are very different from the 511keV photons used in PET. Unlike
monoenergetic PET photons, the photons used in CT consist of a spectrum
of energies with a maximum value that is dependent on the peak X ray tube
voltage (kVp). CT Hounsfield units reflect tissue linear attenuation coefficients
that are higher than those applicable to PET photons as they are measured at an
effective CT energy (~70keV), which is substantially lower than 511keV. An
important step in the process of using CT images for attenuation correction is
to scale the CT images to linear attenuation coefficients that are applicable to
511keV photons. A number of slightly different approaches have been employed
but usually involve multi-linear scaling of the CT Hounsfield units using
functions specific for the X ray tube kVp setting (Fig.11.43). The methods used
are very similar to those described in Section 11.2.3.2 (Eq.(11.10)) for SPECT.
After scaling, the CT images are filtered, so as to have a spatial resolution that is
similar to that of the PET data and attenuation factors are calculated by integration
in a manner indicated by Eq.(11.16). The integration, or forward projection, is
performed over all directions measured by the PET system and, thus, provides
attenuation correction factors for all LORs.
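The multi-linear scaling can be sketched as a simple bilinear function. This is a common published form, but the 0 HU breakpoint and the μ values below are illustrative only; in practice the slopes depend on the tube kVp and the vendor's calibration:

```python
# Illustrative bilinear HU -> mu(511 keV) conversion.
MU_WATER_511 = 0.096   # cm^-1, water at 511 keV (approximate)
MU_BONE_511 = 0.172    # cm^-1, cortical bone at 511 keV (illustrative)

def mu_511_from_hu(hu):
    if hu <= 0:
        # Air (-1000 HU) to water (0 HU): linear scaling against water.
        return max(0.0, MU_WATER_511 * (1.0 + hu / 1000.0))
    # Above water a separate, shallower slope is used, because bone
    # attenuates relatively more at CT energies than at 511 keV.
    return MU_WATER_511 + hu * (MU_BONE_511 - MU_WATER_511) / 1000.0

print(mu_511_from_hu(-1000))  # 0.0 (air)
print(mu_511_from_hu(0))      # 0.096 (water)
```

After this voxel-wise rescaling, the μ map is forward projected along every LOR measured by the PET system to yield the attenuation correction factors.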
CT based attenuation correction has proved to be very effective although
a number of potential problems require consideration. Patient motion,
commonly motion of the arms or head, can cause the CT and PET images to be
misregistered, leading to incorrect attenuation correction factors, which in turn
cause image artefacts and quantitative error. Respiratory motion can also lead to
similar problems as the CT and PET data are acquired over very different time
intervals. CT data acquisition is extremely short and usually captures a particular
phase in the respiratory cycle. In contrast, PET data are acquired over multiple
breathing cycles and the resulting images represent an average position that will
be somewhat blurred in regions where respiratory motion is significant.
In the area around the lung boundary where there is a sharp discontinuity
in the body's attenuation properties, respiratory motion can lead to localized
misregistration of the CT and PET images, and pronounced attenuation
correction artefacts. Another consideration for CT based attenuation correction
arises when the CT FOV is truncated such that parts of the body, usually the
arms, are not captured on the CT image or are only partially included. This
leads to under-correction for attenuation and corresponding artefacts in the PET
images.
PET image
CT image
CT (HU)
FIG.11.43. For PET attenuation correction, CT images have to be rescaled from Hounsfield
units (HU) to linear attenuation coefficients (μ) appropriate for 511 keV. Multi-linear scaling
functions have been used. The rescaled CT images are then forward projected to produce
attenuation correction factors that are applied to the PET emission data, prior to or during
image reconstruction.
Dead time correction methods compensate for the overall loss of counts at high
count rates but do not compensate for the event mis-positioning that can occur
as a result of pulse pile-up.
11.3.5.6. Image calibration
The above corrections substantially eliminate the image artefacts and
quantitative errors caused by the various physical effects that degrade PET data.
As a result, the reconstructed images reflect the activity distribution within the
FOV, within the limitations imposed by the system's finite spatial resolution.
Furthermore, these reconstructed images can be used to quantify the in vivo
activity concentration in a particular organ or tissue. Although this capability is
not always fully exploited, the potential to accurately quantify images in terms of
absolute activity concentration facilitates a range of potential applications.
After image reconstruction, including the application of the various
physical corrections, PET images have arbitrary units, typically counts per
voxel per second. Quantitative data can be extracted from the relevant parts of
the image using region of interest techniques but cannot be readily compared
with other related data such as measurements made with a radioactivity calibrator
(dose calibrator). In order to convert the PET images into units of absolute
activity concentration such as becquerels per millilitre, a calibration factor is
required. This calibration factor is experimentally determined, usually using
a uniform cylinder phantom. The cylinder is filled with a known volume of
water, to which a known amount of radioactivity is added. After ensuring the
radioactivity is uniformly distributed within the phantom, a fully corrected PET
image is acquired. The calibration factor CF can be determined using:
CF = (A × p)/(V × C) (11.18)

where

A/V is the known activity concentration (Bq/mL) within the phantom;

C is the mean voxel data (counts voxel⁻¹ s⁻¹) from a large region well
within the cylinder part of the image;
and p is the positron fraction of the radionuclide used in the calibration experiment
(typically 18F, positron fraction 0.97).
The positron fraction is a property of the radionuclide and is the fraction of
all disintegrations that give rise to the emission of a positron.
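A worked example of Eq. (11.18), with assumed numbers chosen purely for illustration (60 MBq of 18F in a 6000 mL phantom, imaged with a mean ROI value of 2000 counts per voxel per second):

```python
# Assumed calibration measurement (all values invented for illustration).
A_BQ = 60e6        # activity added to the phantom, Bq
V_ML = 6000.0      # phantom volume, mL  (-> true concentration 10 kBq/mL)
P_F18 = 0.97       # positron fraction of 18F
C_MEAN = 2000.0    # mean ROI value, counts per voxel per second

# Eq. (11.18): CF = (A * p) / (V * C)
CF = (A_BQ / V_ML) * P_F18 / C_MEAN

def to_bq_per_ml(raw_voxel_rate, positron_fraction):
    """Convert raw image data to activity concentration for any isotope."""
    return raw_voxel_rate * CF / positron_fraction

conc = to_bq_per_ml(2000.0, P_F18)   # recovers the 10 kBq/mL put in
```

Because the positron fraction is divided out when the calibration factor is applied, the same CF can be reused for other isotopes provided their positron fractions are known, as the text notes.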
The above calibration assumes that the true activity within the phantom
is accurately known. This can usually be achieved to an acceptable level of
tolerance using an activity calibrator that has been calibrated for the isotope of
interest using a long lived standard source that is traceable to a national metrology
institute. In principle, a single calibration factor can be applied to subsequent
studies performed with different isotopes as long as the positron fraction is
known. Calibrated PET images can, thus, be determined by multiplying the raw
image data by the calibration factor and dividing by the positron fraction for the
particular isotope of interest.
11.4. SPECT/CT AND PET/CT SYSTEMS
11.4.1. CT uses in emission tomography
SPECT and PET typically provide very little anatomical information,
making it difficult to precisely localize regions of abnormal tracer accumulation,
particularly in oncology studies where disease can be widely disseminated.
Indeed, it is often the case that the more specific the radiopharmaceutical, the
less anatomical information is available to aid orientation. Relating radionuclide
uptake to high resolution anatomic imaging (CT or MRI (magnetic resonance
imaging)) greatly aids localization and characterization of disease but ideally
requires the two images to be spatially registered. Retrospective software
registration of images acquired separately on different scanner systems has
proved to be effective in certain applications, notably for brain studies where
rigid body assumptions are realistic. However, for most other applications,
the rigid body assumption breaks down and the registration problem becomes
much more difficult. Combined scanner systems, such as SPECT/CT and PET/
CT, provide an alternative solution. The advantage of this hardware approach is
that images from the two modalities are inherently registered with no need for
further manipulation. Of course, this assumption can become unreliable if the
patient moves during data acquisition, but, in general, combined scanner systems
provide an accurate and convenient method for achieving image registration.
In addition to the substantial clinical benefit of registered anatomical and
functional images, the coupling of CT with SPECT and PET systems provides
an additional technical benefit. Although radionuclide sources have been used
for attenuation correction, the availability of co-registered CT is particularly
advantageous for this purpose. In the case of SPECT, the main advantages of
CT based attenuation correction are greater accuracy and reliability compared to
radionuclide sources, while in PET, the main advantage is an effective reduction
in the overall duration of the scanning procedure owing to the speed with which
the CT data can be acquired.
In early PET/CT designs, the PET and CT components were only loosely
coupled: separate computer
systems were required and the user interface could be awkward. In addition, the
availability of the CT for attenuation correction meant that the PET transmission
scanning system was somewhat redundant. Subsequent designs removed the PET
transmission sources and moved the two subsystems towards greater integration.
In some cases, this meant a more compact system with a continuous patient
tunnel. In other cases, the PET and CT gantries were separated by a gap which
allowed greater access to the patient during scanning. Removing the transmission
scanning system also provided scope for increasing the size of the patient port,
so as to accommodate larger patients. This was further achieved by removing
the septa from the PET subsystem and decreasing the size of the end-shields
used to reject out of FOV radiation. Although the PET and CT detectors remain
separate subsystems, many software functions of a modern PET/CT system run
on a common platform, including a common patient database containing both
PET and CT data.
FIG.11.45. Coronal images from a whole body fluorodeoxyglucose PET/CT study. (a) The
PET data are shown in inverse grey scale. (b) The same PET data are shown in a colour scale,
superimposed on the CT in grey scale.
involved single slice CT, whereas later systems have incorporated 4, 16, 64 or
greater slice CT. Advanced CT detector capability allows for extended volume
coverage in a single breath-hold or contrast phase and also facilitates a range
of rapid diagnostic CT protocols, particularly those intended for cardiology. The
level of CT capability required by a combined PET/CT system depends on the
extent to which the system will be used for diagnostic quality CT. For many PET/
CT oncology studies, state of the art CT is not necessary and low dose protocols
are favoured. In these cases, the X ray tube current is reduced and intravenous
contrast would typically not be administered. In addition, for whole body studies,
data would not be acquired under breath-hold conditions in order to improve
spatial alignment of the CT with the PET data that are acquired over multiple
respiratory phases.
As far as CT based attenuation correction is concerned, the main advantage
for PET is not so much improved accuracy, as 68Ge transmission sources are
perfectly adequate when obtained with sufficient statistical quality. The main
advantage of CT based attenuation correction is the speed with which the data
can be acquired. This is particularly important for whole body PET studies
where transmission data are needed at each bed position, requiring extended scan
durations to achieve adequate statistical quality. With multi-detector CT scanners,
low noise images can be acquired over the whole body in only a few seconds.
Replacing radionuclide transmission scans with CT had the effect of substantially
reducing the overall duration of scanning procedures, particularly those requiring
extended axial coverage. In turn, shorter scans increase patient comfort and
reduce the likelihood of motion problems. With the shorter scan times, arms
up acquisition is better tolerated by patients, leading to reduced attenuation
and improved image statistical quality. In addition to these methodological
considerations, shorter scan durations allow for more patient studies in a given
time period and, thus, more efficient utilization of the equipment.
Despite the relatively rapid scanning afforded by modern PET/CT systems,
the PET component still typically requires many minutes of data acquisition and
management of patient motion continues to be a problem. Motion can potentially
cause misalignment of the PET and CT images, degradation of image spatial
resolution and the introduction of characteristic artefacts. Although the CT
images are acquired independently of the PET, the PET images involve CT based
attenuation correction and are, thus, particularly susceptible to patient motion
between the two scans. The types of motion usually encountered include gross
movement, often of the arms or head, and respiratory motion. The former is hard
to correct retrospectively, although external monitoring devices such as camera
based systems can potentially provide solutions for head motion. External
monitoring devices can also help reduce problems due to respiratory motion.
Respiratory gated PET can reduce motion blurring by reconstructing images
from data acquired only during particular phases of the respiratory cycle. This
can be further refined by including a motion model into the PET reconstruction
and incorporating CT data also acquired under respiratory control.
CHAPTER 12
COMPUTERS IN NUCLEAR MEDICINE
J.A. PARKER
Division of Nuclear Medicine and Department of Radiology,
Beth Israel Deaconess Medical Center,
Harvard Medical School,
Boston, Massachusetts,
United States of America
12.1. PHENOMENAL INCREASE IN COMPUTING CAPABILITIES
"I think there is a world market for about five computers"; this remark is
attributed to Thomas J. Watson (Chairman of the Board of International Business
Machines), 1943.
12.1.1. Moore's law
In 1965, Gordon Moore, a co-founder of Intel, said that new memory chips
have twice the capacity of prior chips, and that new chips are released every
18 to 24 months. This statement has become known as Moore's law. Moore's law
means that memory size increases exponentially. More generally, the exponential
growth of computers has applied not only to memory size, but also to many
computer capabilities, and since 1965, Moore's law has remained remarkably
accurate. Further, this remarkable growth in capabilities has occurred with a
steady decrease in price.
Anyone who has even a little appreciation of exponential growth realizes
that exponential growth cannot continue indefinitely. However, the history of
computers is littered with experts who have prematurely declared the end of
Moore's law. The quotation at the beginning of this section indicates that future
growth of computers has often been underestimated.
12.1.2. Hardware versus peopleware
The exponential growth of computer capabilities has a very important
implication for the management of a nuclear medicine department. The growth
in productivity of the staff of a department is slow, especially when compared to
the growth in capabilities of a computer. This means that whatever decision was
398
made in the past about the balance between staff and computers is now out of
date. A good heuristic is: always apply more computer capacity and fewer people
to a new task. Or stated more simply, "hardware is cheap", at least with respect
to what you learned in training or what you decided last time you considered the
balance between hardware and peopleware.
12.1.3. Future trends
In the near future, the increase in personal computer capability is likely
to be due to an increase in the number of central processing units on a single
processor chip (cores) and in the number of processing chips in a single computer.
Multiple processing units have been a key feature of supercomputers for many
years. Coordinating the large number of processors is often a bottleneck in the
application of supercomputers for general purpose computing. Supercomputers
have generally been applied to specific tasks, not to general purpose computing.
The trend towards more cores in personal computers has also suffered from
this bottleneck. Existing applications often run marginally faster on multicore
computers than they do on a single core.
Multi-threaded programming, which has recently received more
attention, ameliorates this bottleneck. Multi-threading is an efficient method of
synchronizing subtasks that can be computed independently in parallel. Image
processing, where different parts of the image can be processed independently, is
well suited to multi-threading. Reworking just the intensive processing portions
of the software as a multi-threaded application can greatly improve the overall
speed on a multicore machine. In fact, several basic image processing packages
are already multi-threaded. Thus, with relatively limited updating, nuclear
medicine software should be able to take advantage of the trend towards multiple
processors in a single computer.
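The strip-wise independence described above can be sketched in a few lines. This is an illustrative Python sketch, not code from any particular package; the function names are invented, and in CPython the global interpreter lock limits the actual speed-up for pure-Python work, so real implementations push the parallel loops into native code.

```python
# Sketch: split an image into interleaved strips of rows and smooth each strip
# in a separate thread, illustrating why image processing multi-threads well.
from concurrent.futures import ThreadPoolExecutor

def smooth_row(row):
    # 1D three-point moving average with edge replication
    n = len(row)
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def smooth_strip(strip):
    # each strip (a list of rows) can be processed independently
    return [smooth_row(row) for row in strip]

def smooth_image(image, n_workers=4):
    # divide the rows among the workers, process in parallel, reassemble
    strips = [image[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        done = list(pool.map(smooth_strip, strips))
    out = [None] * len(image)
    for k, strip in enumerate(done):
        for j, row in enumerate(strip):
            out[k + j * n_workers] = row  # interleave strips back into order
    return out

image = [[float((x + y) % 5) for x in range(8)] for y in range(8)]
smoothed = smooth_image(image)
```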
An area where multiple processing units are currently used in personal
computers is in graphical processing units (GPUs). GPUs, which perform the
same operation on multiple parts of an image simultaneously, are classified as
single instruction, multiple data processors. These units have traditionally been
thought to be very difficult to program, but recently some programming tools are
making them somewhat more readily accessible. They are still very difficult to
program in comparison to multi-threading; however, the improved programming
tools should make these processors more common for the most computing
intensive data processing tasks. They are more likely to be used in the front-end
computers within the imaging devices (see Chapter 11) than in workstations and
servers that will be the focus of this chapter.
[Figure: an Internet address, 24.9.243.78, shown byte by byte in binary
(0001 1000, 0000 1001, 1111 0011, 0100 1110); the last byte, 0100 1110,
equals hexadecimal 4E.]
Size      Bits    Range                    Number of values

1 bit     1       0–1                      2
1 byte    8       0–255                    256
2 bytes   16      0–65 535                 64 k
3 bytes   24      0–16 777 215             16 M
4 bytes   32      0–4 294 967 295          4 G
5 bytes   40      0–1 099 511 627 775      1 T
where

Symbol    Prefix    Power of 2    Value                Power of 10

k         kilo      2^10          1024                 ~10^3
M         mega      2^20          1 048 576            ~10^6
G         giga      2^30          1 073 741 824        ~10^9
T         tera      2^40          1 099 511 627 776    ~10^12
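The entries in the tables above follow directly from powers of two, and a minimal computation reproduces them:

```python
# Recompute the range of an unsigned value for each storage size.
rows = []
for n_bytes in range(1, 6):
    bits = 8 * n_bytes
    max_value = 2 ** bits - 1  # largest unsigned value for this many bits
    rows.append((n_bytes, bits, max_value))

# binary prefixes are close to, but not equal to, powers of ten
assert 2 ** 10 == 1024           # kilo, ~10**3
assert 2 ** 20 == 1048576        # mega, ~10**6
assert 2 ** 40 == 1099511627776  # tera, ~10**12
```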
2D, 3D, 4D, etc. are often used loosely in medicine, but it can be
insightful to clearly understand the dimensions involved in an application.
A relevant example is human vision. A single human eye can see in only two
dimensions; the retina is a 2D structure. From parallax as well as other
physiological inputs, it is possible for people with binocular vision to perceive
the depth of each point in the visual image. The most appropriate model of the
perceived image is a multi-valued function whose values are intensity, hue, saturation
and depth. In this model, the function is 2D. This 2D function represents a
surface in a 3D volume. Over time, the mind is able to construct a 3D model
from sequential 2D surfaces.
On a more basic level, optics models an electromagnetic signal going
through an aperture as a complex 2D function. In addition to amplitude
information, which can be sensed by the eye, the function also has phase
information. Holography takes advantage of phase information, allowing the
observer to see around objects if there is no other object in the way. However,
basic physics limits the amount of information to a sparse (essentially 2D) set of
data contained within the 3D space.
One of the challenges of data visualization is to facilitate the input of
3D data using the 2D channel afforded by the visual system. Often, one
dimension is mapped into time using cine or a mouse to sequence through a stack
of images. Rendering such as a reprojection or a maximum intensity projection
can also help by providing an overview or by increasing the conspicuousness of
the most important features in the data. Both of these visualization methods also
use a sequence of images to overcome the limitations of the 2D visual channel.
12.2.3.1. Continuous, discrete and digital functions
The real world is usually modelled as continuous in space and time.
Continuous means that space or time can be divided into infinitesimally small
increments. A function f(x) is said to be a continuous function if both the
independent variable x and the dependent variable f are represented by continuous
values. The most natural model for the distribution of a radiopharmaceutical in
the body is a continuous 3D function.
An image that is divided into pixels is an example of a discrete function.
The independent variables x and y, which can only take on the values at particular
pixel locations, are digital. The dependent variable, intensity, is continuous.
A function with independent variable(s) that are digital and dependent variable(s)
that are continuous is called a discrete function.
A computer can represent only digital functions, where both the
independent and dependent variable(s) are digital. Digital values provide a good
model of continuous values if the coarseness of the digital representation is small
with respect to the standard deviations of the continuous values. For this reason,
digital images often provide verisimilar representations of the continuous world.
Nuclear medicine is intrinsically digital in the sense that nuclear medicine
imaging equipment processes scintillation events individually. The original Anger
scintillation cameras produced analogue horizontal and vertical position signals,
but modern scintillation cameras process the signals digitally and the output is
digital position signals. Most positron emission tomography (PET) cameras have
discrete crystals, so that the lines of response are intrinsically digital.
12.2.3.2. Matrix representation
The surface of the gamma camera can be visualized as being divided into
tiny squares. An element in a 2D matrix can represent each of these squares.
A 3D matrix can represent a dynamic series of images or single photon emission
computed tomography (SPECT) data collection. A 3D matrix is equivalent
to a 3D digital function, f[x, y, z]. Both representations are equivalent to the
computer program language representation, f[z][y][x].
The lines of response in a PET camera without axial collimation are more
complicated. Perhaps the easiest way to understand the data is to consider a ring
of discrete detectors. The ring can be represented as a 2D array: one axial
and one tangential dimension. Any line of response (LOR) can be defined by
two crystals, and each of those two crystals is defined by two dimensions. Thus,
each LOR is defined by four dimensions. A 4D matrix can represent the lines of
response (not all crystal pairs are in coincidence, so the matrix is sparse, but that
is a detail).
It is particularly simple for a computer to represent a matrix when the
number of elements in each dimension is a power of two. In this case, the x, y and
z values are aligned with the bits in the memory address. Early computer image
dimensions were usually powers of two. Modern programming practice has
considerably lessened the benefit of this simple addressing scheme. However,
where hardware implementation is a large part of the task, there is still a strong
tendency to use values that are a power of two.
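The power-of-two addressing scheme can be made concrete with a small sketch. The dimensions and function names below are illustrative: for an 8 × 8 × 8 matrix, the flat memory offset of f[z][y][x] is simply the bits of z, y and x concatenated.

```python
# With power-of-two dimensions, the x, y and z indices align with bit fields
# of the memory address.
NX = NY = NZ = 8  # 2**3 elements per dimension

def offset(z, y, x):
    # conventional row-major offset into a flattened z-y-x array
    return (z * NY + y) * NX + x

def offset_by_bits(z, y, x):
    # identical result by bit-shifting: the address bits are z | y | x
    return (z << 6) | (y << 3) | x

assert offset(5, 3, 2) == offset_by_bits(5, 3, 2)
```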
12.3. IMAGE PROCESSING
This section will present an introduction to the general principles of image
processing.
FIG.12.1. The upper left quadrant is a sinusoid with a single frequency in the x direction;
the upper right quadrant is a sinusoid with twice the frequency; the lower left quadrant is a
sinusoid with a single frequency in the y direction; and the lower right quadrant is a single
sinusoid which varies in both the x and y direction.
12.3.1.3. Eigenfunctions
Eigenfunctions or eigenvectors can be a difficult mathematical concept.
They are often presented as an advanced concept. However, eigenfunctions are
actually a basic concept. They are relatively easy to understand if the detailed
mathematics is avoided.
Eigenfunctions are the natural functions of the system. For example,
an eigenfunction of a swing is a sinusoidal back and forth motion. The swing
naturally wants to go back and forth in a sinusoidal motion. The output of a system
when the input is an eigenfunction is a scaled version of the input. The system
does not change the shape of the eigenfunction, it just scales the magnitude. For
example, if a pure tone is put into a good audio amplifier, the same tone comes
out, only amplified.
The model implied by these descriptions is that there is a system that has
an input and an output. In the first case, the system is a swing; in the second, the
system is an audio amplifier. It should be noted that eigenfunctions are properties
of systems. A more relevant example is where the system is a gamma camera, the
input is the distribution of radioactivity from a patient and the output is an image.
The swing and the amplifier have sinusoids as eigenfunctions. In fact, a
large class of systems have sinusoids as their eigenfunctions. All linear-time-invariant or linear-shift-invariant systems have sinusoids as their eigenfunctions.
Time-invariant or shift-invariant mean that the properties of the systems do not
change with time or with position. Real imaging systems are rarely linear-shift-invariant systems, but there is often a region over which they can be modelled as
a linear-shift-invariant system, so the sinusoids will be very useful.
A common use of eigenfunctions in nuclear medicine is in measuring
the modulation transfer function (MTF). The MTF is measured by imaging a
spatial sinusoid and seeing how well the peaks and valleys of the sinusoid (the
modulation) are preserved (transferred by the system). If they are completely
preserved, the modulation transfer is 1. If the peaks and valleys are only half
of the original, the modulation transfer is 0.5. The typical bar phantom is an
approximation of a sinusoidal function.
The key point for this section is that a spatial sinusoid is an eigenfunction
of the imaging system, at least over some region. The image has the same shape
as the input, only the amplitude (modulation) of the sinusoid is altered. The basic
properties of the system can be determined by measuring the modulation transfer
of the eigenfunctions. Linearity means that the effect of the imaging system on
any signal that is a combination of eigenfunctions can then be determined as a
scaled sum of eigenfunctions.
12.3.1.4. Basis functions
A function, f(t), can be made up of a sum of a number of other functions,
gk(t). For example, a function can be made from a constant, a line passing through
the origin, a parabola centred at the origin, etc. This function can be written as the
sum of K powers of t:
f(t) = Σ_{k=0}^{K−1} a_k t^k  (12.1)
These functions are just polynomials. The terms tk in the polynomial are
basis functions, with coefficients ak. Selecting different coefficients ak, a large
number of functions can be represented.
Usually, the powers of the independent variable tk seem like they are the
most important part of a polynomial, and in many ways they are. However, it will
be useful to shift the focus from the powers of t to the coefficients ak. To make
a new polynomial, the key is to select the coefficients. From this viewpoint, the
tk terms are just placeholders.
Extending this point of view, the coefficients can be thought of as a discrete
function F[k]. The polynomial function could be rewritten:
f(t) = Σ_{k=0}^{K−1} F[k] t^k  (12.2)
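The coefficient-as-function viewpoint of Eq. (12.2) is easy to state in code; this is a minimal illustrative sketch with an invented function name:

```python
# Polynomial synthesis: the coefficients, viewed as a discrete function F[k],
# generate f(t), with the powers t**k acting only as placeholders.
def poly_eval(F, t):
    # f(t) = sum over k of F[k] * t**k
    return sum(Fk * t ** k for k, Fk in enumerate(F))

F = [1.0, 2.0, 3.0]  # represents f(t) = 1 + 2t + 3t**2
assert poly_eval(F, 2.0) == 17.0
```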
time, still think of the time domain as the real domain and the frequency domain
as the transform domain.
One of the best examples of duality is MRI. Chemists used to working with
nuclear magnetic resonance think of the time signal as the natural signal and
the frequency signal as the transform signal. Imagers think of the image from a
magnetic resonance imager as the natural signal and the spatial frequencies as the
transform domain. However, due to the gradients, the frequency signal gives the
spatial representation and the time signal is the spatial-frequency signal. There is
no real domain and transform domain; it all depends on the point of view.
12.3.1.6. Fourier transform
The Fourier transform equations can be written compactly using complex
exponentials:
F(ω) = ∫ f(t) e^{−iωt} dt  (12.3)

f(t) = (1/2π) ∫ F(ω) e^{iωt} dω  (12.4)

where i is √−1.
The first equation (Eq.(12.3)) from the time or space domain to the
frequency domain is called Fourier analysis; the second equation (Eq.(12.4)),
going from the frequency domain to the time or space domain, is called Fourier
synthesis. Fourier analysis is analogous to polynomial interpolation; Fourier
synthesis is analogous to polynomial evaluation. The relation of these equations
to the sine and cosine transforms can be seen by substituting for the complex
exponential using Euler's formula:

e^{iωt} = cos(ωt) + i sin(ωt)  (12.5)
In two dimensions, the transform pair becomes:

F(kx, ky) = ∫∫ f(x, y) e^{−i(kx x + ky y)} dx dy  (12.6)

f(x, y) = (1/(2π)^2) ∫∫ F(kx, ky) e^{i(kx x + ky y)} dkx dky  (12.7)
The common use of the letter k with a subscript for the spatial frequency
variable has led to the habit of calling the spatial frequency domain the k-space,
especially in MRI. These equations are exactly analogous to the time and
frequency equations with t replaced by x, y, and ω replaced by kx, ky. In the case
of three dimensions, t is replaced by x, y, z, and ω by kx, ky, kz.
The previous equations refer to continuous functions. The limits of
integration of the integrals were not specified, but in fact, the limits are assumed
to be −∞ to +∞. However, computer representation is digital. Computer
representation is often thought of as discrete, not digital, since the accuracy of
representation of numbers is often high, so that the quantification effects can be
ignored (see Section 12.2.3.1). The Fourier transform equations in discrete form
can be written as:
F[k] = Σ_{n=0}^{N−1} f[n] e^{−i2πkn/N}  (12.8)

f[n] = (1/N) Σ_{k=0}^{N−1} F[k] e^{i2πkn/N}  (12.9)
In image processing, the unit of n is often pixels, and the frequency unit k is
often given as a fraction, cycles/pixel. The space variable n runs from 0 to N − 1;
the spatial frequency variable k runs from −0.5 cycles/pixel to (but not including)
+0.5 cycles/pixel in steps of 1/N.
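A direct transcription of Eqs (12.8) and (12.9) makes the pair concrete; this sketch keeps the 1/N factor on the synthesis equation, matching the equations above, and the function names are illustrative:

```python
# Discrete Fourier analysis and synthesis; a round trip recovers the samples.
import cmath

def dft(f):
    N = len(f)
    return [sum(f[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(F):
    N = len(F)
    return [sum(F[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

f = [1.0, 2.0, 0.0, -1.0]
g = idft(dft(f))
assert all(abs(gn - fn) < 1e-12 for gn, fn in zip(g, f))
```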
12.3.1.7. Fourier transform as a correlation
Some understanding of how the Fourier transform pair works can be
obtained by noting the analogy between these equations and a correlation. The
correlation coefficient is written:
r = Σ_i x_i y_i / √(Σ_i x_i^2 · Σ_i y_i^2)  (12.10)

where x and y are the two variables, and i indexes over the number of samples.
The denominator is just normalization, so that r ranges from −1 to 1. The action
is in the numerator. It should be noted that the key feature of a correlation is that
it is the sum of the products.
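The sum-of-products structure of Eq. (12.10) is a one-line computation; an illustrative sketch:

```python
# Correlation coefficient: a sum of products in the numerator, with the
# denominator only normalizing r into the range [-1, 1].
import math

def correlation(x, y):
    num = sum(xi * yi for xi, yi in zip(x, y))
    den = math.sqrt(sum(xi ** 2 for xi in x) * sum(yi ** 2 for yi in y))
    return num / den

assert correlation([1, 2, 3], [2, 4, 6]) == 1.0  # identical shape
assert correlation([1, 0], [0, 1]) == 0.0        # orthogonal
```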
g(t) = ∫ h(t − τ) f(τ) dτ  (12.11)
In three dimensions:
g(x, y, z) = ∫∫∫ h(x − x′, y − y′, z − z′) f(x′, y′, z′) dx′ dy′ dz′  (12.13)
∫ h(t − τ) f(τ) dτ ↔ F(ω)H(ω)  (12.14)

where the symbol ↔ represents Fourier transformation, the left side shows the
time domain operations, and the right side shows the Fourier domain operation.
The complicated integral in the time domain is transformed into a simple
multiplication in the frequency domain.
At first, it may seem that the Fourier transform operation is just as
complicated as convolution. However, since the fast Fourier transform algorithm
provides an efficient method for calculating the Fourier transform, it turns out that,
in general, the most efficient method of performing convolution is to transform
the two functions, multiply their transforms and do an inverse transform of the
product. Calculation of convolutions is one of the major practical applications of
the Fourier transform.
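The convolution theorem can be checked numerically in its discrete (circular) form: convolving directly gives the same result as multiplying the transforms pointwise and inverting. This is an illustrative sketch with invented names and small example signals:

```python
# Circular convolution computed directly matches pointwise multiplication
# of the discrete transforms followed by an inverse transform.
import cmath

def dft(f):
    N = len(f)
    return [sum(f[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(F):
    N = len(F)
    return [sum(F[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circular_convolve(h, f):
    N = len(f)
    return [sum(h[(n - m) % N] * f[m] for m in range(N)) for n in range(N)]

h = [1.0, 0.5, 0.0, 0.0]
f = [0.0, 1.0, 2.0, 3.0]
direct = circular_convolve(h, f)
via_fourier = idft([H * F for H, F in zip(dft(h), dft(f))])
assert all(abs(a - b) < 1e-9 for a, b in zip(direct, [v.real for v in via_fourier]))
```

In practice the transforms are computed with the fast Fourier transform, which is what makes this route the efficient one for large kernels.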
12.3.4. Filtering
The word filtering refers to processing data in the Fourier domain by
multiplying the data by a function, the filter. Conceptually, the idea is that the
frequency components of a signal are altered. The data are thought of in terms
of their frequency or spatial frequency content, not in terms of their time or
space content. The most efficient process in general is (i) Fourier transform,
(ii) multiply by a filter and (iii) inverse Fourier transform. Convolution performs
this same operation in the time or space domain but, in general, is less efficient.
If, however, a filter can be represented in the time or space domain with
only a small number of components that are non-zero, then filter implementation
using convolution becomes more efficient. Many image processing filtering
operations are implemented by convolution. The non-zero portion of the time
Smoothing (3 × 3):

  1  2  1
  2  4  2
  1  2  1

Smoothing (5 × 5):

  1  2  4  2  1
  2  4  8  4  2
  4  8 16  8  4
  2  4  8  4  2
  1  2  4  2  1

Sharpening (5 × 5):

 −1 −1 −1 −1 −1
 −1 −1 −1 −1 −1
 −1 −1 25 −1 −1
 −1 −1 −1 −1 −1
 −1 −1 −1 −1 −1

Unsharp mask (5 × 5):

  0  0 −1  0  0
  0 −1 −2 −1  0
 −1 −2 17 −2 −1
  0 −1 −2 −1  0
  0  0 −1  0  0

X gradient (5 × 5):

  0 −1  0  1  0
  0 −1  0  1  0
  0 −1  0  1  0
  0 −1  0  1  0
  0 −1  0  1  0

Y gradient (5 × 5):

  0  0  0  0  0
 −1 −1 −1 −1 −1
  0  0  0  0  0
  1  1  1  1  1
  0  0  0  0  0
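Applying the 3 × 3 smoothing kernel by convolution is straightforward; the sketch below is illustrative (the edge-replication choice at the border is an assumption, not something specified in the text), and divides by 16 because the kernel weights sum to 16, preserving overall intensity.

```python
# Convolve an image with the 3x3 smoothing kernel, replicating edge pixels.
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]

def smooth(image):
    ny, nx = len(image), len(image[0])
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    # replicate edge pixels at the image border
                    yy = min(max(y + dy, 0), ny - 1)
                    xx = min(max(x + dx, 0), nx - 1)
                    acc += KERNEL[dy + 1][dx + 1] * image[yy][xx]
            out[y][x] = acc / 16.0  # kernel weights sum to 16
    return out

flat = [[5.0] * 4 for _ in range(4)]
assert smooth(flat) == flat  # a region of constant intensity is unchanged
```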
region of constant intensity. The effects are relatively small and are limited to
the edges of the circle. The bottom of Fig.12.2 shows the effect on a very noisy
image of the same object. In this case, the effect is much more pronounced.
FIG.12.2. The upper left quadrant shows a circular region of constant intensity; the upper
right quadrant shows the effect of applying a 5-by-5 smoothing kernel; the lower left quadrant
shows a very noisy version of the circular region; the bottom right quadrant shows the effect of
applying a 5-by-5 smoothing kernel.
1/√(1 + (k/k0)^{2n})  (12.16)
The Butterworth filter has two parameters, k0 and n. The parameter k0 is the
cut-off frequency and n is the order. The filter reaches the value 1/√2 when the
spatial frequency k is equal to k0. The parameter n determines the rapidity of the
transition between the pass-zone and the stop-zone.
The filter is sometimes shown (Fig.12.3) in terms of the square of the filter,
1/(1 + (k/k0)^{2n}). In that case, the filter reaches the value of half at the cut-off
frequency. Confusion can arise since sometimes the filter itself is defined as the
square value, i.e. without the square root. Furthermore, the filter is sometimes
defined with an exponent of n instead of 2n.
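The behaviour of Eq. (12.16) is easy to verify numerically; this illustrative sketch uses the square-root form, so the filter falls to 1/√2 at the cut-off frequency, with the order n controlling the sharpness of the transition:

```python
# Butterworth filter as a function of spatial frequency k (cycles/pixel).
import math

def butterworth(k, k0=0.25, n=5):
    return 1.0 / math.sqrt(1.0 + (k / k0) ** (2 * n))

assert abs(butterworth(0.25) - 1.0 / math.sqrt(2.0)) < 1e-12  # value at cut-off
assert butterworth(0.05) > 0.99   # pass-zone: low frequencies barely attenuated
assert butterworth(0.5) < 0.05    # stop-zone: high frequencies suppressed
```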
FIG.12.3. This figure shows the square of the Butterworth filter in the spatial frequency
domain with a cut-off equal to 0.25 cycles/pixel and n equal to 2, 3, 5, 7 and 10.
FIG.12.4. The upper left quadrant shows a smooth circular region; the upper right quadrant
shows the effect of applying the 5-by-5 sharpening kernel; the lower left quadrant shows a
circular region with a small amount of noise; the bottom right quadrant shows the effect of the
sharpening kernel on amplifying noise.
H(ω) = G(ω)/F(ω)  (12.17)
H(k, ω) = 1/k  (12.18)
It should be noted that the system function is not a function of ω. The higher
frequencies in the image are reduced by a factor proportional to their spatial
frequencies.
The obvious method of restoring the original image is to multiply it by a
restoration filter given by:
G(k, ω) = k  (12.19)
effects less. The Metz filter combines these two goals into a single filter. For any
system function, H(k, ω), the Metz filter is given by:

G(k, ω) = H^{−1}(k, ω)(1 − (1 − H(k, ω)^2)^X)  (12.20)
The first term in this equation, 1/H(k, ω), reverses the effect of the system.
When H(k, ω) is nearly one, the second term is about equal to one; when H(k, ω)
is nearly zero, the second term is about equal to zero; at intermediate values,
the second term transitions smoothly between these two values. In Fig.12.5, a
simulated system function is shown by the dotted lines. Four Metz filters with
different X parameters are shown. At low frequencies, the Metz filter counteracts
the effects of the imaging system. At high frequencies, it takes on the character
of a low-pass filter. The transition between characters is controlled by the
parameter X.
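The two-regime behaviour of Eq. (12.20) can be checked pointwise; the sketch below evaluates the filter for a single value of the system function H (the sample values of H are illustrative, not the simulated MTF of the figure):

```python
# Metz filter for a single value H of the system function.
def metz(H, X):
    # G = (1/H) * (1 - (1 - H**2)**X)
    return (1.0 / H) * (1.0 - (1.0 - H ** 2) ** X)

H_low = 0.999   # low frequency: the system barely degrades the signal
H_high = 0.01   # high frequency: the system passes almost nothing
# restoration (G ~ 1/H) at low frequency; roll-off towards zero at high
assert abs(metz(H_low, 2.0) - 1.0 / H_low) < 1e-3
assert metz(H_high, 2.0) < 0.05
```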
12.3.7.3. Wiener
Wiener filtering is used when the statistical properties of the signal and of
the noise are known. A function known as the power spectral density, S(ω), gives
the expected amount of power in a signal as a function of frequency. Power
means that the function is related to the square of the magnitude of a signal. The
Wiener filter is given by:
H(ω) = Sf(ω)/(Sf(ω) + Sn(ω))  (12.21)
where Sf(ω) is the power spectral density of the signal f(t), and Sn(ω) is the
power spectral density of the noise n(t). Under the proper circumstances, it can
be shown that the Wiener filter is the optimal estimate of a process f(t) given the
noisy data, f(t) + n(t).
For those frequencies where the signal is much larger than the noise, the
Wiener filter is equal to one. For those frequencies where the noise is much larger
than the signal, the Wiener filter is equal to zero. The Wiener filter transitions
smoothly from the pass-zone to the stop-zone when there is more noise in the
data than signal. The Wiener filter provides a solid mathematical basis for the
heuristic that has already been used several times.
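The pass/stop behaviour of Eq. (12.21) follows directly from the ratio; a minimal sketch with illustrative power spectral density values:

```python
# Wiener filter at one frequency, given signal and noise power there.
def wiener(S_f, S_n):
    return S_f / (S_f + S_n)

assert wiener(100.0, 1.0) > 0.99   # signal >> noise: the filter passes
assert wiener(1.0, 100.0) < 0.01   # noise >> signal: the filter stops
assert wiener(1.0, 1.0) == 0.5     # equal power: halfway between
```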
FIG.12.5. The dotted line shows a simulated modulation transfer function (MTF). The four
solid lines show Metz filters for X equal to 1.5, 2, 2.5 and 3.
number N of crystals. N × N memory elements are needed for the LORs. (Some
potential LORs do not traverse the imaging area.) The crystals in a PET camera
make a 2D array. In the usual cylindrical geometry, the two dimensions are the
axial slice and the angular position around the ring of detectors for that slice.
LORs are defined by two crystals and four dimensions. To limit the electronics,
early PET cameras limited coincidences to a single slice. In that case, the slice
position is the same for both detectors, and the LORs are 3D, one axial and two
angular position dimensions.
12.4.4.1. 2D and 3D volume acquisition
Modern PET cameras can collect oblique LORs, greatly increasing
sensitivity. Some cameras have axial collimators that can be inserted or retracted
while others do not. Imaging with axial collimation involves a 3D object, a
3D image and 3D data. Strangely, this type of imaging is called 2D. Imaging
without axial collimation again involves a 3D object and a 3D image, but in
this case the data are 4D. This type of imaging is called 3D imaging. Although
these terms for imaging with and without axial collimation do not make sense,
they are embedded in the PET vernacular.
12.4.4.2. Time of flight
Time of flight (TOF) imaging measures the difference in arrival of the two
photons at the two detectors. The difference in arrival time can be used to position
the annihilation event along the LOR. This provides an additional dimension to
the data. With TOF and no axial collimation, the PET data are 5D. The resolution along the
LOR is relatively low. The speed of light is about 3 × 10^8 m/s, which means
that in 1 ns photons travel 30 cm. Current detectors can detect differences in
arrival of about 0.5 ns. Thus, improvements using TOF detection are important,
but modest.
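The positional uncertainty follows from the arithmetic above: because both photons travel, an arrival-time difference dt displaces the event by c·dt/2 along the LOR. A quick check:

```python
# TOF positioning: position uncertainty along the LOR is c * dt / 2.
C = 3.0e8  # speed of light, m/s

def tof_position_uncertainty(dt_seconds):
    # both photons travel, so the positional uncertainty is half of c * dt
    return C * dt_seconds / 2.0

# 0.5 ns timing resolution localizes the event to about 7.5 cm along the LOR
assert abs(tof_position_uncertainty(0.5e-9) - 0.075) < 1e-12
```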
12.4.4.3. Sampling requirements in PET
Data sampling is determined by the LORs. Determination of the resolution
of a PET camera is complicated, but once it is determined, then the size of the 3D
reconstructed matrix is the same as in SPECT imaging. Non-axial collimation
PET adds oblique data with the potential of increasing the axial dimension.
In practice, the axial resolution is similar for axial collimation and non-axial
collimation PET. With cylindrical cameras, there are many more oblique angles
that can be detected on the centre slice than on end slices. In non-axial collimated
PET, the end slices are noisy. The non-uniform axial sampling is ameliorated in
whole body imaging by overlapping the different bed positions, so that the data
are more uniformly sampled.
12.4.5. Gated acquisition
Data acquisition can be gated to a physiological signal. Owing to count
limitation, if the signal is repetitive, the data from multiple cycles are usually
summed together. A gated, static acquisition will have three dimensions
two spatial and one physiological. A gated dynamic acquisition will have four
dimensions two spatial, one time and one physiological. A gated SPECT
or PET acquisition will have a different four dimensions three spatial and
one physiological. A gated dynamic SPECT or PET acquisition will have five
dimensions three spatial, one time and one physiological. Count limitations
will often limit the dimension and/or the elements per dimension.
12.4.5.1. Cardiac-synchronized
Gated cardiac studies were one of the early studies in nuclear medicine.
Usually, the electrocardiogram is used to identify cardiac timing. The
R-wave is relatively easy to detect; however, a little knowledge of electrocardiography helps in selecting electrode positions where the R-wave has a high amplitude.
The timing of events during the cardiac cycle changes with cycle length.
The timing of contraction, systole, and the initial part of relaxation, diastole,
change relatively little as cycle length changes. Most of the change in cycle
length is a change in diastasis, the period between early relaxation and atrial
contraction. Atrial contraction, atrial systole, has a relatively constant relation
to the end of the cardiac cycle, i.e. to the next R-wave. Thus, for evaluation of
systolic and early diastolic events, it is better to sum cycles using constant timing
from the R-wave than to divide different cycles using a constant proportion of the
cycle length.
However, with constant timing, different amounts of data are collected in
later frames depending on the number of cycles that are long enough to reach
those frames. Normalizing the data for the acquisition time for each frame can
ameliorate this problem. Alternatively, systole and early diastole can be gated
forwards in time from the preceding R-wave, and atrial systole can be gated
backwards in time, joining the two during diastasis.
12.4.5.2. Respiratory-synchronized
Gating to respiration has been used in nuclear medicine both to study
respiratory processes and to ameliorate the blurring caused by breathing.
The signal to use for respiratory synchronization is not clear-cut. Successful
synchronization has been based on different types of plethysmography, on
chest or abdominal position changes, and on image data. Lung volume, the quantity plethysmography measures, can be obtained by integrating the rate of airflow or indirectly from changes in the temperature of the air as it is breathed in and out. Corrections often need to be made for errors that build up from cycle
to cycle. Electrical impedance across the chest changes with respiration and can
also be used to detect lung volume changes. Chest and abdominal motion often
correlate with the respiratory cycle, although detection of motion and changes
in diaphragmatic versus chest wall breathing also pose problems. Although the respiratory signal is difficult to measure, respiratory gating is becoming more common, particularly in quantitative and high resolution PET applications.
12.4.6. List-mode
Instead of collecting data as a sequence of matrixes or frames, the position
of each scintillation can be recorded sequentially in a list. Periodically, a special
mark is entered into the list that gives the time and may include physiological
data such as the electrocardiogram or respiratory phase.
List-mode data are generally much less efficient than matrix-mode data. For example, a 1 million-count frame of data collected into a 256 × 256 matrix will require 65 536 memory locations. In list-mode, the same data will require 1 million memory locations. The reason list-mode is so much less efficient is that it contains extra information, namely the order of arrival of each of the scintillations. However, this information is usually of no interest. List-mode is more efficient than matrix-mode when there is less than one event per pixel on average.
Since an image with one event per pixel is very noisy, it may be surprising
that list-mode is useful at all. However, with gated images, multiple individual
images are added to produce one output image. List-mode can be more efficient
if it is necessary to keep the separate images that make up one gated image for
post-processing.
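The storage trade-off can be made concrete with a back-of-the-envelope sketch; the numbers follow the example above, and the break-even rule is the one-event-per-pixel criterion stated in the text:

```python
# Sketch: matrix-mode stores one counter per pixel; list-mode stores one
# entry per event, so the break-even point is about one event per pixel.

def matrix_mode_size(n_pixels):
    return n_pixels        # one memory location per pixel

def list_mode_size(n_events):
    return n_events        # one memory location per event

pixels = 256 * 256         # 65 536 locations for a 256 x 256 frame
# A 1 million-count frame: matrix-mode is more than 15 times smaller.
assert list_mode_size(1_000_000) / matrix_mode_size(pixels) > 15
# A sparse gated sub-image with 10 000 events: list-mode is smaller.
assert list_mode_size(10_000) < matrix_mode_size(pixels)
```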
12.5. FILE FORMAT
“The nicest thing about standards is that there are so many of them to choose from.” (Ken Olsen, founder of Digital Equipment Corp.)
smooth outlines. Vector graphics are used extensively in gaming software. Vector
graphics are not useful for nuclear medicine images, so this chapter will only
describe raster graphics.
Raster graphics images are made up of a matrix of picture elements or
pixels. Each pixel represents one small rectangle in the image. The primary data
in nuclear medicine are numbers of counts. Counts are translated into a shade of
grey or a colour for display. There are many ways the shade of grey or the colour
of a pixel can be encoded. One factor is the number of bits that are used for
each pixel. Grey scale images often have 8 bits or 1 byte per pixel. That allows
256 shades of grey, with values from 0 to 255.
A common way to encode colour images is in terms of their red, green and
blue (RGB) components. If each of the colours is encoded with 8 bits, 24 bits
or 3 bytes are needed per pixel. If each colour is encoded with 10 bits, 30 bits
are needed per pixel. RGB encoding is typical in nuclear medicine, but there
are other common encodings such as intensity, hue and saturation. For printing,
images need to be encoded in terms of cyan, magenta and yellow. Use of more
complicated encodings allows more vibrant images that use additional colours,
e.g. black, orange, green, silver or gold.
12.5.1.2.1. Transparency
Graphic user interfaces often allow a composition of images where
background images can be seen through a foreground image. In such cases, each
pixel in an image may be given a transparency value that determines how much
of the background will come through the foreground. A common format is 8 bits
for each of the RGB colours and an 8-bit transparency value, giving a total of 32
bits per pixel.
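A 32 bit RGBA pixel of this kind can be packed and unpacked with simple bit operations. A sketch; the byte order chosen here is one of several conventions in use:

```python
# Sketch: packing an RGBA pixel (8 bits per channel) into one 32-bit
# word, as in the 32 bits-per-pixel format described above.

def pack_rgba(r, g, b, a):
    # Red in the highest byte, alpha (transparency) in the lowest.
    return (r << 24) | (g << 16) | (b << 8) | a

def unpack_rgba(p):
    return (p >> 24) & 0xFF, (p >> 16) & 0xFF, (p >> 8) & 0xFF, p & 0xFF

px = pack_rgba(255, 128, 0, 200)   # partially transparent orange
assert unpack_rgba(px) == (255, 128, 0, 200)
assert px < 2**32                  # fits in one 32-bit word
```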
12.5.1.2.2. Indexed colour
Typically, an image will include only a small number of the 16 million (2²⁴) possible colours in a 24 bit RGB palette. A common method of taking advantage
of this is to use indexed colour. Instead of storing the colour in each pixel, an
index value is stored. Typically, the index is 8 bits, allowing 256 colours to be
specified. In the case where more than 256 colours are used, it is often possible
to approximate the colour spectrum using a subset of colours. Sometimes, a
combination of colours in adjacent pixels will help to approximate a broader
spectrum. The algorithms for converting full spectrum RGB images into a limited
palette are remarkably good. It often requires very careful inspection of a zoomed
portion of the image to identify any difference.
For indexed colour, a colour palette is stored with the images. The colour
palette has 256 colour values corresponding to the colours in the image. The
8 bit index value stored in a pixel is used to locate the actual 24 bit colour in
the colour palette, and that colour is displayed in the pixel. Each pixel requires only 8 bits to store the index as opposed to 24 bits to store the actual colour. The colour palette introduces an overhead of 24 bits for each of the 256 colours, but since most images have tens of thousands or hundreds of thousands of pixels, this overhead is small.
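The index-plus-palette scheme can be sketched in a few lines; the grey-scale palette and the tiny image are invented examples:

```python
# Sketch: indexed colour.  The image stores 8-bit indices; a 256-entry
# palette maps each index to a 24-bit RGB value at display time.
palette = [(i, i, i) for i in range(256)]      # assumed grey palette
image = [0, 255, 17, 17]                       # 8-bit index per pixel

displayed = [palette[idx] for idx in image]    # palette lookup at display
assert displayed[1] == (255, 255, 255)

# Storage: 1 byte/pixel plus a palette overhead of 256 * 3 bytes,
# versus 3 bytes/pixel for direct 24-bit colour.
n = 256 * 256
assert n * 1 + 256 * 3 < n * 3
```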
12.5.1.2.3. Compression
The information content of an image is often much smaller than the
information capacity of an image format. For example, images often have large
blank areas or areas that all have the same colour value. Therefore, it is possible
to encode the image using less space. Improving efficiency for one class of
images results in decreasing efficiency for another type of image. The trick is to
pick an encoding which is a good match for the range of images that are typical
of a particular application.
One of the simplest and easiest encodings to understand is run-length
encoding. This method works well for images where large areas all have the
same value, e.g. logos. Instead of listing each pixel value, a value and the number
of times it is repeated are listed. If there are 50 yellow pixels in a row, rather than
listing the value for yellow 50 times, the value 50 is followed by the value for
yellow. Two values instead of 50 values need to be listed.
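The run-length scheme just described is short enough to write out in full; a sketch using (count, value) pairs:

```python
# Sketch: run-length encoding -- each run of identical pixels is
# stored as a [count, value] pair instead of repeated values.

def rle_encode(pixels):
    out = []
    for v in pixels:
        if out and out[-1][1] == v:
            out[-1][0] += 1        # extend the current run
        else:
            out.append([1, v])     # start a new run
    return out

def rle_decode(runs):
    return [v for n, v in runs for _ in range(n)]

row = ["yellow"] * 50 + ["blue"] * 3
runs = rle_encode(row)
assert runs == [[50, "yellow"], [3, "blue"]]   # 2 pairs instead of 53 values
assert rle_decode(runs) == row                 # lossless round trip
```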
Another common encoding is called Lempel-Ziv-Welch (LZW) after its developers. It was originally under patent, but the patent expired on 20 June 2003, and it may now be used freely. The LZW algorithm is relatively simple but
performs well on a surprisingly large class of data. It works well on most images
and works particularly well on images with low information content, logos and
line drawings. It is used in several of the file formats described below.
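The core of the LZW compressor fits in a few lines. A sketch of the compression loop only (decompression is similar but omitted here); the byte-oriented initial table is an assumption:

```python
# Sketch: the core LZW compression loop.  The dictionary starts with
# all single symbols and grows with each new phrase encountered, so
# repetitive data compress well.

def lzw_compress(data):
    table = {chr(i): i for i in range(256)}    # assumed byte alphabet
    phrase, out = "", []
    for ch in data:
        if phrase + ch in table:
            phrase += ch                       # keep growing the phrase
        else:
            out.append(table[phrase])
            table[phrase + ch] = len(table)    # learn the new phrase
            phrase = ch
    if phrase:
        out.append(table[phrase])
    return out

codes = lzw_compress("ABABABA")
assert codes == [65, 66, 256, 258]    # 7 symbols encoded as 4 codes
```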
Both run-length encoding and LZW encoding are non-destructive or
reversible or lossless coding; the original image can be exactly reconstructed
from the encoded image. After destructive or irreversible or lossy coding, the
original image cannot be exactly recovered from the coded image. However,
much more efficient coding can be performed with destructive encoding. Where non-destructive encoding results in reduction of the image size by a factor of 2–3, destructive encoding will often result in a reduction of the image size by a factor of 15–25. The trick is to pick an encoding system where the artefacts introduced by the coding are relatively minor.
The human visual system has decreased sensitivity to low contrast, very
high resolution variations. Details, high resolution variations, are conspicuous
only at high contrast. Discrete cosine transform (DCT) encoding takes advantage
of this property of the visual system. DCT encoding is in the Joint Photographic
Experts Group (JPEG) standard. The low spatial resolution content of the image
is encoded with high fidelity, and the high spatial resolution content is encoded
with low fidelity. The artefacts introduced tend to be low contrast detail. High
contrast detail, and both low and high contrast features at low resolution are
faithfully reproduced. Although there are artefacts introduced by the coding, they
are relatively inconspicuous for natural scenes. For nuclear medicine images, the
artefacts are more apparent on blown up images of text.
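The low-fidelity treatment of high frequencies can be illustrated with a 1D discrete cosine transform. A sketch; the 16-point length, the test signal and the unnormalized transform pair are arbitrary illustrative choices, not the JPEG specification itself:

```python
import math

# Sketch: 1D DCT-II energy compaction, the property JPEG-style coding
# exploits.  A smooth signal is reconstructed almost exactly from its
# lowest-frequency coefficients alone.

def dct(x):
    """Unnormalized DCT-II."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N)
                for n in range(N)) for k in range(N)]

def idct(X):
    """Inverse of the DCT-II above."""
    N = len(X)
    return [X[0] / N + (2.0 / N) * sum(
                X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                for k in range(1, N)) for n in range(N)]

signal = [1.0 + math.cos(math.pi * (n + 0.5) / 8) for n in range(16)]
coeffs = dct(signal)
truncated = coeffs[:4] + [0.0] * 12     # low fidelity: drop high frequencies
approx = idct(truncated)
err = max(abs(a - b) for a, b in zip(signal, approx))
assert err < 1e-9    # the discarded high-frequency detail was negligible
```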
Wavelets are a generalized form of sinusoids. Some improvement in
compression ratios at the same level of apparent noise can be obtained using
wavelet-transform coding. The blocky artefacts that degrade DCT images are
not seen with wavelet-transform images. The JPEG 2000 standard uses wavelettransform coding.
Non-destructive coding systems often make use of the fact that adjacent pixels tend to be equal. The statistical noise in nuclear medicine images reduces the similarity of adjacent pixels, thus reducing the utility of non-destructive coding.
Destructive coding may overcome this limitation, and since the statistical
variations generally do not carry any significant information, image quality may
not be greatly degraded.
12.5.2. Common image file formats
Common image file formats could be used for some or all of the raw nuclear
medicine image data. In fact, they are rarely used for this purpose. However,
secondary images, especially when used for distribution of image information,
generally use these standard file formats. This section will sketch the format of
these files, the advantages and disadvantages of the formats, and the typical uses
of these formats.
12.5.2.1. Bitmap
Bitmap (BMP) is a general term that may refer to a number of different
types of data. When used in reference to image formats, it may be used to mean
a raster graphic as opposed to a vector graphic format, but it often refers to a
Windows image format. The Windows file format is actually called a device
independent bitmap (DIB). The external DIB file format is distinguished from
various device dependent internal Windows BMPs. File name extensions .bmp
and .dib are used for the BMP image file format. BMP may be used to imply an
uncompressed format, but the DIB format defines several types of compression.
The JPEG coding tries to match the coding to human vision. It uses more
precision for brightness than for hue. It uses more precision for low frequency
data than for high frequency detail. The JPEG format is particularly good at
compressing images of natural scenes, the type of images in routine photography.
All web browsers support this format, and it is very widely used in general
photography products. It is not a particularly good format for line drawings and
logos; the GIF format is better for these types of image.
12.5.3. Movie formats
Multitrack movie formats allow audio and video to be blended together.
Often, there are many audio tracks and many video tracks where the actual movie
is a blending of these sources of information. A key part of a multitrack format
is a timing track, which has information about how the tracks are sequenced and
blended in a final presentation. However, in nuclear medicine, there is rarely a
need for all of this capacity. Often, all that is needed is a simple sequence of
images. A more complex movie format can be used for this type of data, but often
a simpler format is easier to implement.
12.5.3.1. Image sequence formats
Of the image formats described so far, BMP, TIFF and GIF can be used
to define an image sequence. An extension of the JPEG format, JPEG 2000,
also allows image sequences. As with the multiframe version of BMP and
TIFF formats, JPEG 2000 is relatively poorly supported. However, the multiframe
version, 89a, of the GIF format is widely supported. It is supported by all web
browsers and by almost all image processing and display programs. The GIF
format is a logical choice for distribution of cine images.
12.5.3.2. Multitrack formats
There are several multitrack movie formats available. As newer formats
that are still under development, they tend to be controlled by particular vendors.
The AVI format is controlled by Microsoft, the QuickTime format is controlled
by Apple, the RealVideo format is controlled by RealNetworks and the Flash
format is controlled by Adobe.
12.5.4. Nuclear medicine data requirements
There are two types of information in a nuclear medicine study: image information and non-image information. As described in the previous section,
there are several general purpose image and movie formats. Often, these formats
lack capabilities that would be optimal for medical imaging. However, some
formats, e.g. TIFF, are general enough to be used for the image portion of the
study data. The advantage of using a widely accepted format is that it allows the
great diversity of software available for general imaging to be adapted to medical
imaging.
12.5.4.1. Non-image information
There is unique nuclear medicine information that must be carried along
reliably with the images. This information includes: identification data, e.g. name,
medical record number (MRN); study data, e.g. type of study, pharmaceutical;
how the image was acquired, e.g. study date, view; etc. This information is
sometimes called meta-information. Most general image formats are not flexible
enough to carry this information along with the image.
12.5.4.1.1. American Standard Code for Information Interchange
Text information is usually and most efficiently coded in terms of character
codes. Each character, including punctuation and spacing, is coded as a number
of bits. Initially, 7 bits were used; 7 bits allowed 2⁷ = 128 codes, which were enough for the 26 lower case and 26 capital letters, plus a fairly large number of symbols and control codes. Using 1 byte (8 bits) for each character meant that
the extra bit could be used for error checking using a scheme called parity. Error checking each character became less of an issue, so the American Standard Code for Information Interchange (ASCII) was extended to 8 bits, allowing the
addition of 128 new codes. Characters in many of the Latin languages could be
encoded using 8 bit ASCII.
12.5.4.1.2. Unicode
Computer usage has transcended national boundaries; it is now just as easy to communicate with the other side of the Earth as within a single building. Internationalization has meant that a single server or client needs to be multilingual. Multilingual programming is now common. The ASCII code is not adequate for this task, so a multilingual code, Unicode, has superseded it. For example, the Java programming language specifies that programs be written in Unicode. Unicode characters can be represented in 32 bits, and there are more than 100 000 character codes, including all common languages and many uncommon languages.
There are different ways to implement Unicode, called Unicode Transformation Format (UTF) encodings. One of the most common is a system
called UTF-8, which uses a variable length coding system. Different numbers of
bytes from one to four are used for different character codes. If the first byte in
a code has a zero in the highest bit, then 1 byte is used. If the first byte in a code
has a one in the highest bit, then subsequent bytes are used. The single byte codes
are identical to the 7-bit ASCII codes. For a document that only includes the
characters in the 7-bit ASCII system, encoding in ASCII and UTF-8 are identical.
This results in an efficient encoding of Latin languages. In addition, any other Unicode character (Greek, Japanese, a symbol, etc.) can be included occasionally in a predominantly Latin text while maintaining average coding efficiency.
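The variable-length behaviour is easy to observe with any UTF-8 codec. A sketch using Python's built-in encoder; the sample characters are illustrative:

```python
# Sketch: UTF-8's variable-length coding.  7-bit ASCII characters take
# 1 byte; other characters take 2 to 4 bytes.
samples = {
    "A": 1,     # ASCII letter: high bit 0, single byte
    "é": 2,     # Latin letter with accent
    "γ": 2,     # Greek
    "語": 3,    # Japanese
    "𝄞": 4,     # musical symbol, outside the basic plane
}
for ch, nbytes in samples.items():
    assert len(ch.encode("utf-8")) == nbytes

# An ASCII-only string encodes identically in ASCII and in UTF-8.
text = "Patient 10256892"
assert text.encode("ascii") == text.encode("utf-8")
```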
12.5.4.1.3. Markup language
The difference between text editors and word processors is that the former largely edit the characters, while the latter also define how the document will appear: the size of the text, the spacing, indentations, etc. For a word processor, the layout of the document as it is entered is the same as the layout when it is printed. In computer jargon, what you see is what you get (WYSIWYG).
Markup has the advantage that it can be read by humans and can be edited
with any text editor. Hypertext Markup Language (HTML), the language used
by the World Wide Web, was originally a markup language. When HTML was first developed, text editors were used to write it. Page layout capabilities were added, and now WYSIWYG editors are generally used. A markup language
allows other types of information to be included. For example, one of the key
features of HTML is that it includes hyperlinks to other HTML pages on the
Internet.
12.5.4.1.4. Extensible Markup Language
Currently, a very popular and increasingly used method for encoding text
information is a standard called Extensible Markup Language (XML). It provides
a method of producing machine-readable textual data. For example, the Really Simple Syndication (RSS) standard is encoded with XML. A common use of the
RSS standard is for web publishing. Almost all newspaper web sites use RSS to
communicate with subscribers.
XML is not a markup language itself, but rather a general format for
defining markup languages. It defines how the markup is written. A document
is said to be well formed if it follows the XML standard. If it is well formed,
then the markup can be separated from the rest of the text. More information,
provided by an XML schema, defines a particular markup language. The most
recent version of HTML, XHTML, is fully compatible with the XML standard.
For nuclear medicine file formats, one of the key properties of XML is
that it can be used to make text information readable by machine. For example,
consider the following section of an XML document:
<patient>
<name>
<last>Parker</last>
<first>Tony</first>
</name>
<medical_record_number>10256892</medical_record_number>
</patient>
It would be straightforward for a computer to unambiguously determine
the name and MRN from this document. Human readability and text editor
processing of XML make it highly self-documenting. Owing to its wide
acceptance in the computer world, XML is the most appropriate current format
for storing non-imaging nuclear medicine information.
XML can be used to store numerical data as characters,
e.g. <number>12.56</number>. The scalable vector graphics format for vector
graphics is an XML language. However, XML is too inefficient for coding raster
graphics image information.
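The patient fragment shown above can be read mechanically with any XML parser. A sketch using Python's standard library; the snippet is reproduced inline with matching tags:

```python
import xml.etree.ElementTree as ET

# Sketch: machine-reading the patient fragment with the standard library.
doc = """
<patient>
  <name>
    <last>Parker</last>
    <first>Tony</first>
  </name>
  <medical_record_number>10256892</medical_record_number>
</patient>
"""
root = ET.fromstring(doc)
last = root.findtext("name/last")
first = root.findtext("name/first")
mrn = root.findtext("medical_record_number")
assert (last, first, mrn) == ("Parker", "Tony", "10256892")
```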
12.5.4.2. Non-image information in general formats
(a) JPEG 2000: An extension of the JPEG format, JPEG 2000 (http://www.jpeg.org/jpeg2000/), has several very interesting capabilities. The JPEG 2000 standard is a general purpose
image format; however, it was developed with medical imaging as a
potential application. Since it uses wavelet-transform compression, the
image quality at high compression ratios is considerably better than with
DCT compression. JPEG 2000 allows multiple frames, so it could be used
for cine; it has an advantage over GIF of providing good compression
of information rich images such as natural scenes. However, the reason
that it is included in this section is its capability to carry considerable
meta-information in tight association with the image data. Although this
format was defined nearly a decade ago, it has not found wide acceptance.
The JPEG 2000 wavelet-transform coding is allowed in the Digital Imaging
and Communications in Medicine (DICOM) standard, but mass market
software, such as browsers, generally do not support this standard. It has
255 or 65 535 counts per pixel are required. Since processed data may require a
larger dynamic range, floating point or complex data may be appropriate in many
circumstances. For some analysis, signed integers may be most appropriate.
A region of interest can be represented as a single bit raster or with vector
graphics.
The selection of a small number of data element formats would simplify the
programming task. The trick is to limit the number of formats while maintaining
all of the functionality. It would be logical to select some standard set of data
element formats, especially a set of formats associated with a widely accepted
data standard. However, it is not clear that such a standard exists. At a minimum, 16 bit integer and floating point formats are required. Probably, signed and unsigned 8 and 16 bit formats should be included. It is not clear whether
increasing complexity by including a vector graphics format is worth the added
functionality.
12.5.4.3.3. Organization
A logical first level of organization of the image data is what will be called
a dataset. A dataset is an n dimensional set of data in which each of the data elements has the same format, e.g. 8 bit unsigned integer, 16 bit signed integer or IEEE 32 bit floating point data. The key characteristic is that the dataset is a sequence of identical elements. The dimensions do not need to be the same; 7 sets of 100 curves with 256 points each form a 7 × 100 × 256 dataset.
The dataset should be the atom of the data; there should not be a lower
level of organization. The lower levels of organization will depend on the type of
data and the dimensions. For example, a 256 × 256 × 256 tomographic volume could be considered as 256 axial slices of 256 × 256, but it would be equally valid to consider that dataset as 256 coronal slices of 256 × 256. The organization
of a dataset consisting of 4D PET lines of response will depend entirely on
the reconstruction algorithm. The lower levels of organization depend on the
non-image information and should not be part of the image data format.
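The equal validity of the axial and coronal readings can be shown with index arithmetic over a flat dataset. A sketch using a tiny 4 × 4 × 4 volume as a stand-in for 256 × 256 × 256; the layout convention is an assumption:

```python
# Sketch: a dataset as a flat sequence of identical elements.  The same
# volume can be read out as axial or coronal slices purely by index
# arithmetic; no lower-level organization is stored in the dataset.
NX = NY = NZ = 4                        # stand-in for 256 x 256 x 256

volume = list(range(NX * NY * NZ))      # flat dataset, z-major layout

def axial_slice(z):
    return [volume[(z * NY + y) * NX + x]
            for y in range(NY) for x in range(NX)]

def coronal_slice(y):
    return [volume[(z * NY + y) * NX + x]
            for z in range(NZ) for x in range(NX)]

# Both views cover exactly the same elements, just grouped differently.
assert sorted(v for z in range(NZ) for v in axial_slice(z)) == volume
assert sorted(v for y in range(NY) for v in coronal_slice(y)) == volume
```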
12.5.5. Common nuclear medicine data storage formats
12.5.5.1. Interfile
The Interfile format has been used predominantly in nuclear medicine.
The final version of Interfile, version 3, was defined in 1992. Although Interfile
has been largely replaced by DICOM, it has some interesting properties. The
metadata, encoded in ASCII, are readable and editable by any text editor. The
lexical structure of the metadata was well defined, so that it is computer readable.
the accession number for a customer table would be the customer number. Some databases use a combination of fields as the accession number, but the basic idea is the same: there must be something unique which identifies each row.
12.6.1.2. Index
An index provides rapid access to the records (rows) of a table. The index
is separate from the table and is sorted to allow easy searching, often using a
tree structure. The tables themselves are usually not sorted; the records are just added one after another. The index, not the table, provides the organization of the records for fast access. Indexes are one of the important technologies provided by a database vendor.
Indexes can be complicated. However, there is no information stored in an index; an index can always be rebuilt from the tables. In fact, when
transferring data to a new database, the indexes are usually rebuilt. Indexes
are important for efficiency, but the only information they contain is how to
efficiently access the tables.
12.6.1.3. Relation
A key element of relational databases is the relation. Relations connect one table to another. Conceptually, the relations form a very important part of a database, and provide much of its complexity. However, the relations themselves are actually very simple. A relation between the patient table and the study table
might be written:
patient.MRN=study.MRN
This says that the records in the patient table are linked to the records in the
study table by the MRN field.
A physician thinks of the database in terms of patients who have studies that
involve radiopharmaceuticals; however, a radiopharmacist thinks of isotopes and
radiopharmaceuticals. The relations facilitate these different points of view. The
physician can access the database in terms of patients and the radiopharmacist
can access the database in terms of radiopharmaceuticals. The same tables are
used; it is just the small amount of information contained in the relations that is
different.
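The relation patient.MRN = study.MRN can be expressed directly as a SQL join. A sketch using Python's built-in SQLite; the table and column names follow the text, while the rows and pharmaceuticals are invented examples:

```python
import sqlite3

# Sketch: the relation patient.MRN = study.MRN as a SQL join.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE patient (MRN TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE study (study_id INTEGER PRIMARY KEY,
                        MRN TEXT, pharmaceutical TEXT);
    INSERT INTO patient VALUES ('10256892', 'Parker, Tony');
    INSERT INTO study (MRN, pharmaceutical)
        VALUES ('10256892', 'Tc-99m MDP'), ('10256892', 'F-18 FDG');
""")
# The physician's view: studies reached through the patient.
rows = con.execute("""
    SELECT patient.name, study.pharmaceutical
    FROM patient JOIN study ON patient.MRN = study.MRN
""").fetchall()
assert len(rows) == 2
assert all(name == 'Parker, Tony' for name, _ in rows)
```

The radiopharmacist's view would simply join in the other direction, starting from a radiopharmaceutical table; the underlying tables are unchanged.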
implementations of the theory often have security holes that hackers can exploit.
Furthermore, with aggregation of private health information, the potential extent
of a security breach becomes catastrophic. Security depends not only on computer
systems, but also on humans who are often the weak link.
There needs to be a balance between the damage caused by a security breach and the expense of the security measures. Often, relatively low damage
situations are addressed with overly expensive systems, especially in terms
of lost productivity. The dominant effect of security should not be to prevent
authorized users from accessing information. Nuclear medicine tends to be a
relatively low risk environment, so in most circumstances the balance should
favour productivity over security.
BIBLIOGRAPHY
HUTTON, B.F., BARNDEN, L.R., FULTON, R.R., "Nuclear medicine computers", Nuclear Medicine in Clinical Diagnosis and Treatment (ELL, P.J., GAMBHIR, S.S., Eds), Churchill Livingstone, London (2004).
LEE, K.H., Computers in Nuclear Medicine: A Practical Approach, 2nd edn, Society of Nuclear Medicine (2005).
PARKER, J.A., Image Reconstruction in Radiology, CRC Press, Boston, MA (1990).
PIANYKH, O.S., Digital Imaging and Communications in Medicine (DICOM): A Practical Introduction and Survival Guide, Springer, Berlin (2008).
SALTZER, J.H., KAASHOEK, M.F., Principles of Computer System Design: An Introduction, Morgan Kaufmann, Burlington (2009).
TODD-POKROPEK, A., CRADDUCK, T.D., DECONINCK, F., A file format for the exchange of nuclear medicine image data: a specification of Interfile version 3.3, Nucl. Med. Commun. 13 (1992) 673–699.
CHAPTER 13
IMAGE RECONSTRUCTION
J. NUYTS
Department of Nuclear Medicine and Medical Imaging Research Center,
Katholieke Universiteit Leuven,
Leuven, Belgium
S. MATEJ
Medical Image Processing Group,
Department of Radiology,
University of Pennsylvania,
Philadelphia, Pennsylvania,
United States of America
13.1. INTRODUCTION
This chapter discusses how 2D or 3D images of tracer distribution can
be reconstructed from a series of so-called projection images acquired with a
gamma camera or a positron emission tomography (PET) system [13.1]. This
is often called an inverse problem: the reconstruction is the inverse of the acquisition. It is called an inverse problem because making software to compute the true tracer distribution from the acquired data turns out to be more difficult than the forward direction, i.e. making software to simulate the acquisition.
There are basically two approaches to image reconstruction: analytical
reconstruction and iterative reconstruction. The analytical approach is based
on mathematical inversion, yielding efficient, non-iterative reconstruction
algorithms. In the iterative approach, the reconstruction problem is reduced
to computing a finite number of image values from a finite number of measurements. That simplification enables the use of iterative numerical inversion instead of analytical mathematical inversion. Iterative inversion tends to require more computer power, but it can cope with more complex (and hopefully more accurate) models of the acquisition process.
FIG. 13.1. The relation between projections and sinograms in parallel-beam projection. The parallel-beam (PET) acquisition is shown as a block with dimensions s, φ and z. A cross-section at fixed φ yields a projection; a cross-section at fixed z yields a sinogram.
Y(s, φ) = ∫∫ Λ(x, y) δ(x cos φ + y sin φ − s) dx dy    (13.1)

where the function δ(x cos φ + y sin φ − s) is unity for the points on the LOR (s, φ) and zero elsewhere. It should be noted that with the notation used here, φ = 0 corresponds to projection along the y axis.
The Radon transform describes the acquisition process in 2D PET and in SPECT with parallel-hole collimation, if attenuation can be ignored. Assuming that Λ(x, y) represents the tracer distribution in the transaxial slice z through the patient, then Y(s, φ) represents the corresponding sinogram, and contains the z-th row of the projections acquired at angles φ. Figure 13.1 illustrates the relation between the projection and the sinogram.
The X ray transform has an adjoint operation that appears in both analytical and iterative reconstruction. This operator is usually called the back projection operator, and can be written as:

B(x, y) = Backproj(Y(s, φ))

= ∫₀^π Y(x cos φ + y sin φ, φ) dφ    (13.2)
The back projection is not the inverse of the projection, B(x, y) ≠ Λ(x, y). Intuitively, the back projection sends the measured activity back into the image by distributing it uniformly along the projection lines. As illustrated in Fig. 13.2, projection followed by back projection produces a blurred version of the original image. This blurring corresponds to the convolution of the original image with the 2D convolution kernel 1/√(x² + y²).
FIG.13.2. The image (left) is projected to produce a sinogram (centre), which in turn is back
projected, yielding a smoothed version of the original image.
Since Y(s, 0) = ∫ Λ(s, t) dt, the 1D Fourier transform of the projection at φ = 0 can be written as:

(F₁Y)(ν_s, 0) = ∫ Y(s, 0) e^(−i2πν_s s) ds    (13.3)

= ∫∫ Λ(s, t) e^(−i2πν_s s) dt ds    (13.4)

The 2D Fourier transform of the image is:

(F₂Λ)(ν_x, ν_y) = ∫∫ Λ(x, y) e^(−i2π(ν_x x + ν_y y)) dx dy    (13.5)

Comparing Eq. (13.4) with Eq. (13.5) evaluated at ν_y = 0 yields:

(F₁Y)(ν_s, 0) = (F₂Λ)(ν_s, 0)    (13.6)
(F₁Y)(ν_s, 0) is the 1D Fourier transform of the projection along the y axis and (F₂Λ)(ν_x, 0) is a central slice along the ν_x axis through the 2D Fourier transform of the image. Equation (13.6) is the central slice theorem for the special case of projection along the y axis. This result would still hold if the object had been rotated, or equivalently, the x and y axes. Consequently, it holds for any angle φ:

(F₁Y)(ν_s, φ) = (F₂Λ)(ν_s cos φ, ν_s sin φ)    (13.7)

The image is recovered from its 2D Fourier transform by the inverse transform:

Λ(x, y) = ∫∫ (F₂Λ)(ν_x, ν_y) e^(i2π(ν_x x + ν_y y)) dν_x dν_y    (13.8)
Application of the central slice theorem (Eq. (13.7)) and reversing the order of integration finally results in:

Λ(x, y) = ∫₀^π [∫ (F₁Y)(ν_s, φ) |ν_s| e^(i2πν_s (x cos φ + y sin φ)) dν_s] dφ

which is the FBP algorithm. This algorithm involves the following steps:

(a) Apply the 1D Fourier transform to Y(s, φ) to obtain (F₁Y)(ν_s, φ);
(b) Filter (F₁Y)(ν_s, φ) with the so-called ramp filter |ν_s|;
(c) Apply the 1D inverse Fourier transform to obtain the ramp filtered projections Ŷ(s, φ) = ∫ (F₁Y)(ν_s, φ) |ν_s| e^(i2πν_s s) dν_s;
(d) Apply the back projection operator (Eq. (13.2)) to Ŷ(s, φ) to obtain the desired image Λ(x, y).
It should be noted that the ramp filter sets the DC component (i.e. the
amplitude of the zero frequency) of the image to zero, while the mean value of
the reconstructed image should definitely be positive. As a result, straightforward
discretization of FBP causes significant negative bias. The problem is reduced
with zero padding before computing the Fourier transform with fast Fourier
transform (FFT). Zero padding involves extending the sinogram rows with zeros
at both sides. This increases the sampling in the frequency domain and results
in a better discrete approximation of the ramp filter. However, a huge amount of zero padding is required to eliminate the bias completely. The
next paragraph shows how this need for zero padding can be easily avoided. It
should be noted that after inverse Fourier transform, the extended region may be
discarded, so the size of the filtered sinogram remains unchanged.
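The effect can be illustrated with a short sketch (illustrative names, assuming a single noise-free sinogram row): without padding, zeroing the DC component forces the filtered row to sum exactly to zero, which is the source of the negative bias; symmetric zero padding followed by cropping leaves a small positive mean.

```python
import numpy as np

# Ramp filtering of one sinogram row with and without zero padding.
def ramp_filter(row, pad_factor=1):
    n = len(row)
    m = pad_factor * n
    start = (m - n) // 2
    padded = np.zeros(m)
    padded[start:start + n] = row              # extend with zeros at both sides
    nu = np.fft.fftfreq(m)
    filtered = np.real(np.fft.ifft(np.fft.fft(padded) * np.abs(nu)))
    return filtered[start:start + n]           # discard the extended region

row = np.exp(-np.linspace(-3, 3, 64) ** 2)     # one noise-free sinogram row
plain = ramp_filter(row, pad_factor=1)
padded = ramp_filter(row, pad_factor=4)

assert abs(plain.sum()) < 1e-9                 # DC removed: zero mean, negative bias
assert padded.sum() > 0                        # padding restores a positive mean
```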
Instead of filtering in the Fourier domain, the ramp filtering can also be implemented as a 1D convolution in the spatial domain. For this, the inverse Fourier transform of $|\nu|$ is required. This inverse transform actually does not exist, but approximating it as the limit for $\varepsilon \rightarrow 0$ of the well behaved function $|\nu|e^{-\varepsilon|\nu|}$ gives [13.3, 13.4]:
$F^{-1}(|\nu|e^{-\varepsilon|\nu|})(s) = \dfrac{\varepsilon^2 - (2\pi s)^2}{(\varepsilon^2 + (2\pi s)^2)^2}$   (13.11)

$\simeq -\dfrac{1}{(2\pi s)^2}$  for $\varepsilon \ll s$   (13.12)
In practice, band limited functions are always worked with, implying that the ramp filter has to be truncated at the frequencies $|\nu| \geq 1/(2\tau)$, where $\tau$ represents the sampling distance. The corresponding band limited convolution kernel equals:

$\dfrac{1}{2\tau^2}\,\dfrac{\sin(\pi s/\tau)}{\pi s/\tau} - \dfrac{1}{4\tau^2}\left(\dfrac{\sin(\pi s/(2\tau))}{\pi s/(2\tau)}\right)^2$   (13.13)
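Evaluated at the sample points $s = n\tau$, this band limited kernel reduces to the familiar discrete FBP kernel: $1/(4\tau^2)$ at $n = 0$, zero for even $n$, and $-1/(\pi^2 n^2 \tau^2)$ for odd $n$. A short sketch (illustrative names) verifies this:

```python
import numpy as np

# Band limited ramp kernel of Eq. (13.13), evaluated at s = n*tau.
def ramp_kernel(n, tau=1.0):
    s = n * tau
    term1 = np.sinc(s / tau) / (2 * tau**2)             # sin(pi s/tau)/(pi s/tau)
    term2 = np.sinc(s / (2 * tau)) ** 2 / (4 * tau**2)  # squared sinc at half rate
    return term1 - term2

assert np.isclose(ramp_kernel(0), 0.25)                 # 1/(4 tau^2)
assert np.isclose(ramp_kernel(2), 0.0)                  # zero for even n
assert np.isclose(ramp_kernel(1), -1 / np.pi**2)        # odd n
assert np.isclose(ramp_kernel(3), -1 / (9 * np.pi**2))
```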
$(F_2\Lambda)(\nu_x, \nu_y) = \sqrt{\nu_x^2 + \nu_y^2}\,(F_2B)(\nu_x, \nu_y)$   (13.15)

where $B$ is the image obtained by back projecting the unfiltered projections (see Fig. 13.2).
the so-called LORs, where each pair of detectors in coincidence defines a single
LOR. In this section, the discrete nature of the detection is ignored, since the
analytical approach is more easily described assuming continuous data. Consider
the X ray transform in 3D, which can be written as:
$Y(\vec{u}, \vec{s}) = \int \Lambda(\vec{s} + t\vec{u})\, \mathrm{d}t$   (13.17)

where the LOR is defined as the line parallel to $\vec{u}$ and through the point $\vec{s}$. The vector $\vec{u}$ is a unit vector, and the vector $\vec{s}$ is restricted to the plane orthogonal to $\vec{u}$; hence, $(\vec{u}, \vec{s})$ is 4D.
Most PET systems are either constructed as a cylindrical array of detectors
or as a rotating set of planar detector arrays and, therefore, have cylindrical
symmetry. For this reason, the inversion of Eq.(13.17) is studied for the case
where $\vec{u}$ is restricted to the band $\Omega_0$ on the unit sphere, defined by $|u_z| \leq \sin\theta_0$, as illustrated in Fig. 13.4. It should be noted that only half of the sphere is actually needed because $Y(\vec{u}, \vec{s}) = Y(-\vec{u}, \vec{s})$, but working with the complete sphere is more
convenient.
FIG.13.4. Each point on the unit sphere corresponds to the direction of a parallel projection.
An ideal rotating gamma camera with a parallel-hole collimator only travels through the
points on the equator. An idealized 3D PET system would also acquire projections along
oblique lines; it collects projections for all points of the set $\Omega_0$. The set $\Omega_0$, defined by $\theta_0$, is the non-shaded portion of the unit sphere. To recover a particular frequency $\vec{\nu}$ (of the Fourier transform of the object), at least one point on the circle $C_{\vec{\nu}}$ is required.
For $\theta_0 > 0$, the problem becomes overdetermined, and there are infinitely many ways to compute the solution. This can be seen as follows. Each point of $\Omega_0$ corresponds to a parallel projection. According to the central slice theorem, this provides a central plane perpendicular to $\vec{u}$ of the 3D Fourier transform $L(\vec{\nu})$ of $\Lambda(\vec{x})$. Thus, the equator subset of $\Omega_0$ (i.e. all points on the equator of the unit sphere in Fig. 13.4) provides all planes intersecting the z axis, which is sufficient to recover the entire image $\Lambda(\vec{x})$ via inverse Fourier transform. The rest of $\Omega_0$ with $\theta_0 > 0$ provides additional (oblique) planes through $L(\vec{\nu})$, which are obviously redundant. A simple solution would be to select a sufficient subset from the data. However, if the data are noisy, a more stable solution is obtained by using all of the measurements. This is achieved by computing $L(\vec{\nu})$ from a linear combination of all available planes:
$L(\vec{\nu}) = \int_{\Omega_0} \mathrm{d}\vec{u}\; \delta(\vec{u}\cdot\vec{\nu})\, H(\vec{u}, \vec{\nu})\, (F_2Y)(\vec{u}, \vec{\nu})$   (13.18)

where the filter $H$ must satisfy the normalization:

$\int_{\Omega_0} \mathrm{d}\vec{u}\; \delta(\vec{u}\cdot\vec{\nu})\, H(\vec{u}, \vec{\nu}) = 1$   (13.19)

A well known choice is the Colsher filter:

$H_C(\vec{u}, \vec{\nu}) = \dfrac{|\vec{\nu}|}{2\pi}$  if $\sin\psi \leq \sin\theta_0$

$H_C(\vec{u}, \vec{\nu}) = \dfrac{|\vec{\nu}|}{4\arcsin(\sin\theta_0/\sin\psi)}$  otherwise   (13.20)
where $\psi$ is the angle between $\vec{\nu}$ and the z axis: $\nu_z/|\vec{\nu}| = \cos\psi$. The direct Fourier
reconstruction method can be applied here, by straightforward inverse Fourier
transform of Eq.(13.18). However, an FBP approach is usually preferred, which
can be written as:
$\Lambda(\vec{x}) = \int_{\Omega_0} \mathrm{d}\vec{u}\; Y^F(\vec{u}, \vec{x} - (\vec{x}\cdot\vec{u})\vec{u})$   (13.21)
Here, $Y^F$ is obtained by filtering $Y$ with the Colsher filter (or another filter satisfying Eq. (13.19)): $Y^F(\vec{u}, \vec{s}) = F_2^{-1}(H_C(\vec{u}, \vec{\nu})\,(F_2Y)(\vec{u}, \vec{\nu}))$. The coordinate $\vec{s} = \vec{x} - (\vec{x}\cdot\vec{u})\vec{u}$ is the projection of the point $\vec{x}$ on the plane perpendicular to $\vec{u}$; it selects the LOR through $\vec{x}$ in the parallel projection $\vec{u}$.
13.2.3.2. The reprojection algorithm
The previous analysis assumed that the acceptance angle $\theta_0$ was a constant, independent of $\vec{x}$. As illustrated in Fig. 13.5, this is not the case in practice. The acceptance angle is maximum at the centre of the FOV, becomes smaller with increasing distance from the centre and vanishes near the axial edges of the FOV. In other words, the projections are complete for $\vec{u}$ orthogonal to the z axis (these are the 2D multislice parallel-beam projections) and are truncated for the oblique parallel projections. The truncation becomes more severe for more oblique projections (Fig. 13.5).
FIG.13.5. An axial cross-section through a cylindrical PET system, illustrating that the
acceptance angle is position dependent (a). Oblique projections are truncated (b). In the
reprojection algorithm, the missing oblique projections (dashed lines) are computed from a
temporary multislice 2D reconstruction (c).
will not be used. A good compromise between minimum data loss and practical implementation must be sought [13.9].
Another approach is to start with a first reconstruction, using the smallest acceptance angle over all positions $\vec{x}$ in the FOV. This usually means that only the parallel projections orthogonal to the z axis are used. The missing oblique projection values are computed from this first reconstruction (Fig. 13.5(c)) and used to complete the measured oblique projections. This eliminates the truncation, and the 3D FBP method of the previous section can be applied. This method [13.10] was the standard 3D PET reconstruction method for several years, until it was replaced by the faster Fourier rebinning approach (see below).
13.2.3.3. Rebinning techniques
The complexity (estimated as the number of LORs) increases linearly
with the axial extent for 2D PET, but quadratically for 3D PET. To keep the
processing time acceptable, researchers have sought ways to reduce the size of
the data as much as possible, while minimizing the loss of information induced
by this reduction.
Most PET systems have a cylindrical detector surface: the detectors are
located on rings with radius R, and the rings are combined in a cylinder along the
z axis. The data are usually organized in sinograms which can be written as:
$Y_P(s, \phi, z, \Delta z) = Y(\vec{u}, \vec{s})$   (13.22)

with

$\vec{u} \propto (-\sin\phi,\; \cos\phi,\; \Delta z/(2\sqrt{R^2 - s^2}))$, normalized to unit length, and $\vec{s}$ the point on the LOR nearest to the z axis.
The parameter s is the distance between the LOR and the z axis. The LOR corresponds to a coincidence between detector points with axial positions $z - \Delta z/2$ and $z + \Delta z/2$. Finally, $\phi$ is the angle between the y axis and the projection of the LOR on the xy plane. The coordinates $(s, \phi, z)$ are identical to those often used in 2D tomography. It should be noted that, in practice, $s \ll R$ and, as a result, the direction of the LOR, the vector $\vec{u}$, is virtually independent of s. In other words, a set of LORs with fixed $\Delta z$ can then be treated as a parallel projection with good approximation. LORs with $\Delta z = 0$ are often called direct LORs, while LORs with $\Delta z \neq 0$ are called oblique.
The basic idea of rebinning algorithms is to compute estimates of the
direct sinograms from the oblique sinograms. If the rebinning algorithm is good,
most of the information from the oblique sinograms will go into these estimates.
As a result, the data have been reduced from a complex 3D geometry into a
much simpler 2D geometry without discarding measured signal. The final
reconstruction can then be done with 2D algorithms, which tend to be much
faster than fully 3D algorithms. A popular approach is to use Fourier rebinning,
followed by maximum-likelihood reconstruction.
13.2.3.4. Single slice and multislice rebinning
The simplest way to rebin the data is to treat oblique LORs as direct LORs
[13.11]. This corresponds to the approximation:
$Y_P(s, \phi, z, \Delta z) \approx Y_P(s, \phi, z, 0)$   (13.23)
The oblique parallel projections can be written as:

$Y(s, \phi, z, \delta) = \int \Lambda(s\cos\phi - t\sin\phi,\; s\sin\phi + t\cos\phi,\; z + t\delta)\, \mathrm{d}t$   (13.24)

where:

$\delta = \tan\theta$;

$\theta$ is the angle between the LOR and the xy plane;

and the integration variable t is the distance between the position on the LOR and the z axis.

It follows that:

$Y(s, \phi, z, \delta) = \dfrac{Y_P(s, \phi, z, \Delta z = 2\delta\sqrt{R^2 - s^2})}{\sqrt{1 + \delta^2}}$   (13.25)

$\approx \dfrac{Y_P(s, \phi, z, \Delta z = 2\delta R)}{\sqrt{1 + \delta^2}}$   (13.26)

$\approx Y(s, \phi, z, 0)$   (13.27)
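A minimal sketch of single slice rebinning for ring-indexed data is given below. The array layout and names are illustrative assumptions, not the format of any particular scanner: each oblique sinogram is simply added to the direct slice at the axial midpoint of its two rings.

```python
import numpy as np

# Single slice rebinning sketch. oblique[r1, r2] holds the sinogram
# measured between rings r1 and r2 (illustrative dense array of shape
# (n_rings, n_rings, n_phi, n_s)); the midpoint (r1 + r2)/2 gives
# 2*n_rings - 1 slices spaced half a ring apart.
def ssrb(oblique, max_ring_diff):
    n_rings = oblique.shape[0]
    direct = np.zeros((2 * n_rings - 1,) + oblique.shape[2:])
    counts = np.zeros(2 * n_rings - 1)
    for r1 in range(n_rings):
        for r2 in range(n_rings):
            if abs(r1 - r2) <= max_ring_diff:
                direct[r1 + r2] += oblique[r1, r2]  # midpoint slice index
                counts[r1 + r2] += 1
    return direct / np.maximum(counts, 1)[:, None, None]

# a uniform data set rebins to a uniform multislice sinogram
rebinned = ssrb(np.ones((4, 4, 3, 5)), max_ring_diff=3)
assert rebinned.shape == (7, 3, 5) and np.allclose(rebinned, 1.0)
```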
FIG. 13.6. Fourier rebinning: the distance from the rotation axis is obtained via the frequency–distance principle. This distance is used to identify the appropriate direct sinogram.

According to the frequency–distance principle, the component at frequency $(\nu_s, k)$ of the 2D Fourier transform of a sinogram stems mainly from activity at distance $t = -k/(2\pi\nu_s)$ from the rotation axis. That component of the oblique sinogram $(z, \delta)$ can therefore be assigned to the same component in the direct sinogram at $z + t\delta$.
$Y_{12}(\nu_s, k, z, 0) \simeq \begin{cases} \dfrac{1}{\delta_{\max}(\nu_s, k, z)} \displaystyle\int_0^{\delta_{\max}} \mathrm{d}\delta\; Y_{12}\!\left(\nu_s, k, z + \dfrac{k\delta}{2\pi\nu_s}, \delta\right) & \text{if } \nu_s > \nu_0 \\[2mm] Y_{12}(\nu_s, k, z, 0) & \text{if } \nu_s \leq \nu_0,\; |k| \leq k_0 \\[2mm] 0 & \text{if } |k|/(2\pi\nu_s) > R_f \end{cases}$   (13.28)

where $Y_{12}$ denotes the 2D Fourier transform of the sinogram with respect to s and $\phi$.
It should be noted that the rebinning expression is only valid for large $\nu_s$. In the low frequency range, only the direct sinogram is used. The last line of Eq. (13.28) holds because the image $\Lambda(x, y, z)$ is assumed to be zero outside the FOV: $x^2 + y^2 > R_f^2$.
A more rigorous mathematical derivation of the frequency–distance relation is given in Ref. [13.14]. Alternative derivations based on exact rebinning expressions are given in Ref. [13.13].
Exact rebinning expressions can be derived in the Fourier domain:

$Y_{13}(\nu_s, \phi, \nu_z, \delta) = Y_{13}(\nu_s', \phi + \psi, \nu_z, 0)$   (13.29)

The subscript of $Y_{13}$ denotes a Fourier transform with respect to s and z. Defining:

$\psi = \arctan\left(\dfrac{\delta\nu_z}{\nu_s}\right)$,  $\nu_s' = \nu_s\sqrt{1 + \delta^2\nu_z^2/\nu_s^2}$   (13.30)
In the non-TOF case, back projecting all projections blurs the image with the 2D kernel $1/\sqrt{x^2 + y^2}$ (where $\otimes$ denotes 2D convolution):

$B(x, y) = \Lambda(x, y) \otimes \dfrac{1}{\sqrt{x^2 + y^2}}$   (13.33)

With TOF, the back projection blurring kernel is additionally damped by a Gaussian:

$B_{\mathrm{TOF}}(x, y) = \Lambda(x, y) \otimes \left(\mathrm{Gauss}(x, y, \sqrt{2}\sigma_{\mathrm{TOF}})\,\dfrac{1}{\sqrt{x^2 + y^2}}\right)$   (13.34)

$= \Lambda(x, y) \otimes \left(\dfrac{1}{4\pi\sigma_{\mathrm{TOF}}^2\sqrt{x^2 + y^2}}\exp\left(-\dfrac{x^2 + y^2}{4\sigma_{\mathrm{TOF}}^2}\right)\right)$   (13.35)
It should be noted that the Gaussian in the equation above has a standard deviation of $\sqrt{2}\sigma_{\mathrm{TOF}}$. This is because the Gaussian blurring is present in the projection and in the back-projection. The filter required in TOF PET FBP is derived by inverting the Fourier transform of $B_{\mathrm{TOF}}$, and equals:
$\mathrm{TOF\_recon\_filter}(\nu) = \dfrac{1}{2\sqrt{\pi}\,\sigma_{\mathrm{TOF}}}\,\dfrac{\exp(2\pi^2\sigma_{\mathrm{TOF}}^2\nu^2)}{I_0(2\pi^2\sigma_{\mathrm{TOF}}^2\nu^2)}$   (13.36)

where $I_0$ is the zero order modified Bessel function of the first kind.
This FBP expression is obtained by using the natural TOF back projection,
defined as the adjoint of the TOF projection. This back projection also appears in
LS approaches, and it has been shown that with this back projection definition,
FBP is optimal in an (unweighted) LS sense [13.15]. However, TOF PET data
are redundant and different back projection definitions could be used; they would
yield different expressions for BTOF(x,y) in Eq.(13.34) and, therefore, different
TOF reconstruction filters.
Just as for non-TOF PET, exact and approximate rebinning algorithms for
TOF PET have been derived to reduce the data size. As the TOF information
limits the back projection to a small region, the errors from approximate rebinning
are typically much smaller than in the non-TOF case.
or

$y_i = \sum_{j=1}^{J} A_{ij}\lambda_j + \bar{b}_i + n_i$,  $i = 1, ..., I$   (13.37)
The symbol $y_i$ denotes the number of photons measured at LOR i, where the index i runs over all of the sinogram elements (merging the three or four sinogram dimensions into a single index). The index j runs over all of the image voxels, and $A_{ij}$ is the probability that a unit of radioactivity in j gives rise to the detection of a photon (SPECT) or photon pair (PET) in LOR i. The estimate of the additive contribution is denoted as $\bar{b}_i$. This estimate is assumed to be noise-free and includes, for example, scatter and randoms in PET or cross-talk between different energy windows in multitracer SPECT studies. Finally, $n_i$ represents the noise contribution in LOR i.

Image reconstruction now consists of finding $\lambda$, given A, y and $\bar{b}$, and a statistical model for n.
For further reading about this subject, the recent review paper on iterative
reconstruction by Qi and Leahy [13.16] is an ideal starting point.
13.3.1.2. Objective functions
The presence of the noise precludes exact reconstruction. For this reason,
the reconstruction is often treated as an optimization task: it is assumed that a
useful clinical image can be obtained by maximizing a well chosen objective
function. When the statistics of the noise are known, a Bayesian approach can be applied, searching for the image $\hat{\lambda}$ that maximizes the conditional probability on the data:
$\hat{\lambda} = \arg\max_{\lambda}\; p(\lambda | y)$

$= \arg\max_{\lambda}\; \dfrac{p(y | \lambda)\, p(\lambda)}{p(y)}$

$= \arg\max_{\lambda}\; p(y | \lambda)\, p(\lambda)$

$= \arg\max_{\lambda}\; \left(\ln p(y | \lambda) + \ln p(\lambda)\right)$   (13.38)
The second equation is Bayes' rule. The third equation holds because $p(y)$ does not depend on $\lambda$, and the fourth equation is valid because computing the logarithm does not change the position of the maximum. The probability $p(y|\lambda)$ gives the
likelihood of measuring a particular sinogram y, when the tracer distribution equals $\lambda$. This distribution is often simply called the likelihood. The probability $p(\lambda)$ represents the a priori knowledge about the tracer distribution, available before PET or SPECT acquisition. This probability is often called the prior distribution. The knowledge available after the measurements equals $p(y|\lambda)p(\lambda)$ and is called the posterior distribution. To keep things simple, it is often assumed that no prior information is available, i.e. $p(\lambda|y) \propto p(y|\lambda)$. Finding the solution then reduces to maximizing the likelihood $p(y|\lambda)$ (or its logarithm). In this section, maximum-likelihood algorithms are discussed. Maximum a posteriori (MAP) algorithms are discussed in Section 13.3.5, as a strategy to suppress noise propagation.
A popular approach to solve equations of the form of Eq.(13.37) is LS
estimation. This is equivalent to a maximum-likelihood approach, if it is assumed
that the noise is Gaussian with a zero mean and a fixed, position independent
standard deviation $\sigma$. The probability to measure the noisy value $y_i$ when the expected value was $\sum_j A_{ij}\lambda_j + \bar{b}_i$ then equals:

$p_{\mathrm{LS}}\left(y_i \,\Big|\, \sum_j A_{ij}\lambda_j + \bar{b}_i\right) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\dfrac{\left(y_i - \left(\sum_j A_{ij}\lambda_j + \bar{b}_i\right)\right)^2}{2\sigma^2}\right)$   (13.40)
As the noise in the sinogram is not correlated, the likelihood (i.e. the probability of measuring the entire noisy sinogram y) equals:

$p_{\mathrm{LS}}(y | \lambda) = p_{\mathrm{LS}}(y | A\lambda + \bar{b}) = \prod_i p_{\mathrm{LS}}\left(y_i \,\Big|\, \sum_j A_{ij}\lambda_j + \bar{b}_i\right)$   (13.41)

Maximizing this likelihood is equivalent to minimizing the least squares (LS) cost function:

$L_{\mathrm{LS}} = \sum_i \left(y_i - \left(\sum_j A_{ij}\lambda_j + \bar{b}_i\right)\right)^2 = (y - (A\lambda + \bar{b}))'(y - (A\lambda + \bar{b}))$   (13.42)
where the prime denotes matrix transpose. Setting the first derivatives with respect to $\lambda_j$ to zero for all j gives:
$A'(y - A\lambda - \bar{b}) = 0$   (13.43)
FIG.13.8. The image of point sources is projected and back projected again along ideal
parallel beams. This yields a shift-invariant blurring.
FIG.13.9. The image of point sources is projected and back projected again with collimator
blurring. This yields a shift-variant blurring.
If the noise variance differs between the LORs, a weighted least squares (WLS) objective function is obtained:

$L_{\mathrm{WLS}} = \sum_i \dfrac{\left(y_i - \left(\sum_j A_{ij}\lambda_j + \bar{b}_i\right)\right)^2}{\sigma_i^2} = (y - (A\lambda + \bar{b}))'\,C_y^{-1}\,(y - (A\lambda + \bar{b}))$   (13.44)

where $C_y$ is the covariance matrix of the data, which is diagonal with elements $\sigma_i^2$.
FIG. 13.10. The operator $A'C_y^{-1}A$ is derived for a particular activity distribution (top left) and then applied to a few point sources x (panels: $Ax$, $C_y^{-1}Ax$, $A'C_y^{-1}Ax$). Although ideal parallel-beam projection was used, shift-variant blurring is obtained.
The measured counts are Poisson distributed; the probability of measuring $y_i$ photons when the expected count was $\sum_j A_{ij}\lambda_j + \bar{b}_i$ equals:

$p_{\mathrm{ML}}\left(y_i \,\Big|\, \sum_j A_{ij}\lambda_j + \bar{b}_i\right) = \dfrac{e^{-\left(\sum_j A_{ij}\lambda_j + \bar{b}_i\right)}\left(\sum_j A_{ij}\lambda_j + \bar{b}_i\right)^{y_i}}{y_i!}$   (13.46)

The corresponding log likelihood equals:

$L_{\mathrm{ML}} = \sum_i \ln p_{\mathrm{ML}}\left(y_i \,\Big|\, \sum_j A_{ij}\lambda_j + \bar{b}_i\right) = \sum_i \left(y_i \ln\left(\sum_j A_{ij}\lambda_j + \bar{b}_i\right) - \left(\sum_j A_{ij}\lambda_j + \bar{b}_i\right) - \ln y_i!\right)$

$\simeq \sum_i \left(y_i \ln\left(\sum_j A_{ij}\lambda_j + \bar{b}_i\right) - \left(\sum_j A_{ij}\lambda_j + \bar{b}_i\right)\right)$   (13.47)
It should be noted that the term $\ln y_i!$ can be dropped, because it is not a function of $\lambda$. As $L_{\mathrm{ML}}$ is a non-linear function of $\lambda$, the solution cannot be written as a product of matrices. However, it is sometimes helpful to know that the features of the Poisson objective function are often very similar to those of the WLS function (Eq. (13.44)).
13.3.2. Optimization algorithms
Many iterative reconstruction algorithms have been proposed to optimize
the objective functions LWLS and LML. Here, only two approaches are briefly
described: preconditioned conjugate gradient methods and optimization transfer,
with expectation maximization (EM) as a special case of the latter.
13.3.2.1. Preconditioned gradient methods
The objective function is maximized when its first derivatives are zero. Denoting the expected count in LOR i as:

$\hat{y}_i = \sum_j A_{ij}\lambda_j + \bar{b}_i$   (13.48)

the derivatives of the objective functions can be written as:

$\dfrac{\partial L_{\mathrm{WLS}}(\lambda)}{\partial\lambda_j} = \sum_i A_{ij}\,\dfrac{y_i - \hat{y}_i}{\sigma_i^2}$   (13.49)

$\dfrac{\partial L_{\mathrm{ML}}(\lambda)}{\partial\lambda_j} = \sum_i A_{ij}\,\dfrac{y_i - \hat{y}_i}{\hat{y}_i}$   (13.50)
The optimization can be carried out by a steepest ascent method, which can be formulated as follows:

$d^k = \nabla L(\lambda^{k-1})$   (13.51)

$\lambda^k = \lambda^{k-1} + \alpha_k d^k$  with  $\alpha_k = \arg\max_{\alpha} L(\lambda^{k-1} + \alpha d^k)$   (13.52)

Convergence can be accelerated by scaling the gradient with the inverse of the Hessian matrix H of second derivatives (a Newton-type step):

$p^k = -H^{-1}\nabla L(\lambda^{k-1}) = -H^{-1}d^k$   (13.53)

with:

for WLS: $H_{jk} = -\sum_i \dfrac{A_{ij}A_{ik}}{\sigma_i^2} = -(A'C_y^{-1}A)[j, k]$   (13.54)

for ML: $H_{jk} = -\sum_i \dfrac{A_{ij}A_{ik}\,y_i}{\hat{y}_i^2} \simeq -\sum_i \dfrac{A_{ij}A_{ik}}{\hat{y}_i}$  if $\hat{y} \simeq y$   (13.55)
In practice, the Hessian cannot be inverted directly, and it is approximated with a simpler (e.g. diagonal) preconditioning matrix M, yielding the update:

$\lambda^k = \lambda^{k-1} + \alpha_k M d^k$   (13.56)
FIG. 13.11. The dotted lines are isocontours of the objective function. The solid line shows the convergence of the steepest gradient ascent algorithm, the dashed line the convergence of conjugate gradient ascent. It should be noted that the starting points are equivalent because of the symmetry. The objective function equals $-(a|x - x_0|^p + b|y - y_0|^p)$, with $p = 2.15$.

13.3.2.2. Conjugate gradient methods

The conjugate gradient algorithm is designed to avoid these oscillations [13.18]. The first iteration is identical to that of the steepest gradient ascent. However, in the following iterations, the algorithm attempts to move in a direction for which the gradient along the previous direction(s) remains the same (i.e. equal to zero). The idea is to eliminate the need for a new optimization along these previous directions. Let $d_{\mathrm{old}}$ be the previous direction and H the Hessian matrix (i.e. the second derivatives). It is now required that the new direction $d_{\mathrm{new}}$ be such that the gradient along $d_{\mathrm{old}}$ does not change. When moving in direction $d_{\mathrm{new}}$, the gradient will change (using a quadratic approximation) as $\alpha H d_{\mathrm{new}}$. Requiring that the resulting change along $d_{\mathrm{old}}$ is zero yields the condition:

$d_{\mathrm{old}}'\, H\, d_{\mathrm{new}} = 0$   (13.57)

This behaviour is illustrated by the dashed line in Fig. 13.11: in the second iteration, the algorithm moves in a direction such that the trajectory cuts the isocontours at the same angle as in the starting point. For a quadratic function in n dimensions, convergence is obtained after no more than n iterations. As the function in Fig. 13.11 is not quadratic, more than two iterations are required for full convergence.

The new direction can be easily computed from the previous ones, without computation of the Hessian H. The Polak–Ribière algorithm is given by [13.18]:

$\beta = \dfrac{(g_{\mathrm{new}} - g_{\mathrm{old}})'\, g_{\mathrm{new}}}{g_{\mathrm{old}}'\, g_{\mathrm{old}}}$

$d_{\mathrm{new}} = g_{\mathrm{new}} + \beta\, d_{\mathrm{old}}$   (13.58)

where $g = \nabla L(\lambda)$ denotes the gradient of the objective function.
This algorithm requires storage of the previous gradient $g_{\mathrm{old}}$ and the previous search direction $d_{\mathrm{old}}$. In each iteration, it computes the new gradient and search direction, and applies a line search along the new direction.
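The behaviour on a quadratic function can be sketched as follows (a toy example with invented names; the exact line search has a closed form only because the objective is quadratic):

```python
import numpy as np

# Conjugate gradient ascent with the Polak-Ribiere update (Eq. (13.58))
# on a concave quadratic L(x) = -x'Qx/2 + c'x.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # positive definite
c = np.array([1.0, 1.0])
grad = lambda x: c - Q @ x               # gradient of L

x = np.zeros(2)
g_old = grad(x)
d = g_old.copy()                         # first step = steepest ascent
for _ in range(2):                       # n steps suffice for a quadratic in n dims
    alpha = (d @ grad(x)) / (d @ Q @ d)  # exact line search along d
    x = x + alpha * d
    g_new = grad(x)
    beta = ((g_new - g_old) @ g_new) / (g_old @ g_old)
    d = g_new + beta * d                 # new direction, no Hessian needed
    g_old = g_new

assert np.allclose(x, np.linalg.solve(Q, c))  # maximum reached in 2 steps
```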
13.3.2.3. Preconditioned conjugate gradient methods
Both techniques mentioned above can be combined to obtain a fast
reconstruction algorithm, as described in Ref.[13.17]. The preconditioned
conjugate gradient ascent algorithm (with preconditioning matrix M) can be
written as follows:
$g_{\mathrm{new}} = \nabla L(\lambda_{\mathrm{old}})$

$p_{\mathrm{new}} = M g_{\mathrm{new}}$   (13.59)
However, this likelihood cannot be computed, because the data xij are not
available. The emission measurement only produces sums of the complete data,
since:
$y_i = \sum_j x_{ij} + \bar{b}_i$   (13.64)
In the expectation step, the unavailable $x_{ij}$ are replaced by their expected values, given the data y and the current image estimate $\lambda^{(k)}$:

$\hat{x}_{ij}^{(k)} = A_{ij}\lambda_j^{(k)}\,\dfrac{y_i}{\sum_{j'} A_{ij'}\lambda_{j'}^{(k)} + \bar{b}_i}$   (13.65)
Maximizing the expected complete-data log likelihood with respect to $\lambda_j$ and setting the derivative to zero yields:

$\sum_i A_{ij}\,\dfrac{\lambda_j^{(k)}}{\lambda_j}\,\dfrac{y_i}{\sum_{j'} A_{ij'}\lambda_{j'}^{(k)} + \bar{b}_i} - \sum_i A_{ij} = 0$   (13.66)

Solving for $\lambda_j$ produces the MLEM algorithm:

$\lambda_j^{(k+1)} = \dfrac{\lambda_j^{(k)}}{\sum_i A_{ij}} \sum_i A_{ij}\,\dfrac{y_i}{\sum_{j'} A_{ij'}\lambda_{j'}^{(k)} + \bar{b}_i}$   (13.67)
$\lambda_j^{(k+1)} = \dfrac{\lambda_j^{(k)}}{\sum_{i\in\mathrm{LORs}} A_{ij}} \sum_{m\in\text{event list}} \dfrac{A_{i_m j}}{\sum_{j'} A_{i_m j'}\lambda_{j'}^{(k)} + \bar{b}_{i_m}}$   (13.68)
where $i_m$ represents the LOR index in which the m-th event has been recorded. The main difference is that the MLEM sum is now evaluated (including calculations of the relevant forward and back projections) only over the list of the available events (in any order). However, it is important to mention here that the normalizing term in front of the sum (the sensitivity matrix $\sum_i A_{ij}$) still has to be calculated over all possible LORs, and not only those with non-zero counts.
This represents a challenge for the attenuated data (attenuation considered as
part of the system matrix A), since the sensitivity matrix has to be calculated
specifically for each object and, therefore, it cannot be pre-computed. For modern systems with a large number of LORs, its calculation often takes more time than the list-mode reconstruction itself. For this reason, alternative approaches
(involving certain approximations) have been considered for the calculation of
the sensitivity matrix, such as subsampling approaches [13.24] or Fourier based
approaches [13.25].
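The equivalence of event-driven and binned updates can be illustrated with a toy example (invented names; $\bar{b} = 0$ for brevity). The event loop implements the sum of Eq. (13.68), while the sensitivity is computed over all LORs:

```python
import numpy as np

# List-mode MLEM sketch (Eq. (13.68)) checked against binned MLEM.
rng = np.random.default_rng(2)
A = rng.random((20, 5))
lam_true = rng.random(5) + 0.5
y = rng.poisson(5 * A @ lam_true)         # binned counts
events = np.repeat(np.arange(20), y)      # event list: one LOR index per count

sens = A.sum(axis=0)                      # over all LORs, including empty ones
lam = np.ones(5)
for _ in range(100):
    yhat = A @ lam                        # expected counts in every LOR
    update = np.zeros(5)
    for i_m in events:                    # loop over the event list
        update += A[i_m] / yhat[i_m]
    lam = lam / sens * update

lam2 = np.ones(5)                         # binned MLEM on the same data
for _ in range(100):
    lam2 = lam2 / sens * (A.T @ (y / (A @ lam2)))
assert np.allclose(lam, lam2)
```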
FIG. 13.13. Comparison of the data formats for binned time of flight (TOF) data (left: histo-projection for a 45° view) and for the DIRECT (direct image reconstruction for TOF) approach (right: histo-image for a 45° view). Histo-projections can be viewed as an
extension of individual non-TOF projections into TOF directions (time bins), and their
sampling intervals relate to the projection geometry and timing resolution. Histo-images are
defined by the geometry and desired sampling of the reconstructed image. Acquired events
and correction factors are directly placed into the image resolution elements of individual
histo-images (one histo-image per view) having a one to one correspondence with the
reconstructed image voxels.
The TOF mode of operation has some practical consequences (and novel
possibilities) for the ways the acquired data are stored and processed. The
482
IMAGE RECONSTRUCTION
list-mode format is very similar to the non-TOF case. The event structure is just slightly expanded by a few bits (5–8 bits/event) to include the TOF information, and the events are processed event by event as in the non-TOF case.
On the other hand, the binned data undergo considerable expansion when
accommodating the TOF information. The projection (X ray transform) structures
are expanded by one dimension, that is, each projection bin is expanded in the
LOR direction into the set of time bins forming the so-called histo-projections
(see Fig. 13.13 (left)). In practice, the effect of this expansion on the data size is not as bad as it appears, because the localized nature of TOF data allows decreased angular sampling (typically by about 5–10 times) in both azimuthal and co-polar directions (views), while still satisfying angular sampling requirements. The
resulting data size, thus, remains fairly comparable to the non-TOF case. During
the reconstruction process, the histo-projection data are processed time-bin
by time-bin (instead of projection line by line in the non-TOF case). It should
be noted that hybrid approaches also exist between the two aforementioned
approaches, in which the data are binned in the LOR space, but events are stored
in list-mode for each LOR bin.
TOF also allows a conceptually different approach of data partitioning,
leading to more efficient reconstruction implementations, by using the DIRECT
(direct image reconstruction for TOF) approach utilizing so-called histo-images (see Fig. 13.13 (right)) [13.25]. In the DIRECT approach, the data are directly
histogrammed (deposited), for each view, into image resolution elements
(voxels) of desired size. Similarly, all correction arrays and data are estimated
or calculated in the same histo-image format. The fact that all data and image
structures are now in image arrays (of the same geometry and size) makes
possible very efficient computer implementations of the data processing and
reconstruction operations.
13.3.3.4. Reconstruction of dynamic data
Data acquired from an object dynamically changing with time in activity
distribution, or in morphology (shape), or in both is referred to as dynamic data.
An example of the first case would be a study looking at temporal changes in activity uptake in individual organs or tissues, so-called time–activity curves. An
example of the second case would be a gated cardiac study providing information
about changes of the heart morphology during the heart beat cycle (such as
changes of the heart wall thickness or movements of the heart structures).
The dynamic data can be viewed as an expansion of static (3D) data
by the temporal information into 4D (or 5D) data. The dynamic data are
usually subdivided (spread) into a set of temporal (time) frames. In the
first application, each time frame represents data acquired within a certain
sequential time subinterval of the total acquisition time. The subintervals can
be uniform, or non-uniform with their durations adjusted, for example, to the
speed of the change of the activity curves. In the second application, each time
frame represents the total counts acquired within a certain stage (gate) of the
periodic organ movement (e.g. gated based on the electrocardiogram signal).
In the following, issues of the reconstruction of dynamic data in general are
addressed. Problems related to the motion and its corrections are discussed in
Section 13.3.6.4.
Once the data are subdivided (during acquisition) or sorted (acquired
list-mode data) into the set of time frames, seemingly the most natural way of
reconstructing them is to do it for each time frame separately. It should be noted
that this is the only available option for the analytical reconstruction approaches,
while the iterative reconstruction techniques can also reconstruct the dynamic
data directly in 4D (or 5D). A problem with frame by frame reconstruction is
that data in the individual time frames are quite noisy, since each time frame
only has a fraction of the total acquired counts, leading to noisy reconstructions.
Consequently, the resulting reconstructions often have to be filtered in the spatial
and/or temporal directions to obtain images of any practical value. Temporal
filtering takes into account time correlations between the signal components in
the neighbouring time frames, while the noise is considered to be independent.
Filtering, however, leads to resolution versus noise trade-offs.
On the other hand, reconstructing the whole 4D (or 5D) dataset together,
while using this correlation information in the (4D) reconstruction process via
proper temporal (resolution) kernels or basis functions, can considerably improve
those trade-offs as reported in the literature (similarly to the case of spatial
resolution modelling). The temporal kernels (basis functions) can be uniform in
shape and distribution, or can have a non-uniform shape (e.g. taking into account the expected or actual shape of the time–activity curves) and can be distributed on a non-uniform grid (e.g. reflecting count levels at individual frames or image
locations). The kernel shapes and distributions can be defined, or determined,
beforehand and be fixed during the reconstruction. During the reconstruction
process, just the amplitudes of the basis functions are reconstructed. The
algorithms derived in the previous subsections basically stay the same, where the
temporal kernels can be considered as part of the system matrix A (comparable
to including the TOF kernel in TOF PET). Another approach, more accurate but
mathematically and computationally much more involved, is to iteratively build
up the shape (and distribution) of the temporal kernels during the reconstruction
in conjunction with the reconstruction of the emission activity (that is, the
amplitude of the basis functions).
While iterative methods lead to a clear quality improvement when
reconstructing dynamic data, thanks to the more accurate models of the signal and
data noise components, for the quantitative dynamic studies their shortcoming is their non-linear behaviour, especially if they are not fully converged. For example, the local bias levels can vary across the time frames as the counts, local activity levels and object morphology change, which can lead to less accurate time–activity curves. On the other hand, analytical techniques which are linear
and consequently do not depend on the count levels and local activity, might
provide a more consistent (accurate) behaviour across the time frames in the
mean (less bias of the mean), but much less consistent (less precise) behaviour in
the variance due to the largely increased noise. It is still an open issue which of
the two approaches provides more clinically useful results, and the discussions
and research on this topic are still open and ongoing.
13.3.4. Acceleration
13.3.4.1. Ordered-subsets expectation-maximization
The MLEM algorithm requires a projection and a back projection in
every iteration, which are operations involving a large number of computations.
Typically, MLEM needs several tens to hundreds of iterations for good
convergence. Consequently, MLEM reconstruction is slow and many researchers
have studied methods to accelerate convergence.
The method most widely used is ordered-subsets expectation-maximization
(OSEM) [13.26]. The MLEM algorithm (Eq.(13.67)) is rewritten here for
convenience:
$\hat{y}_i^{(k)} = \sum_j A_{ij}\lambda_j^{(k)} + \bar{b}_i$   (13.69)

$\lambda_j^{(k+1)} = \dfrac{\lambda_j^{(k)}}{\sum_i A_{ij}} \sum_i A_{ij}\,\dfrac{y_i}{\hat{y}_i^{(k)}}$   (13.70)
where k is the iteration number and $\lambda^{(1)}$ is typically set to a uniform, strictly positive image.
In OSEM, the set of all projections {1 ... I} is divided into a series of subsets $S_t$, $t = 1, ..., T$. Usually, these subsets are exhaustive and non-overlapping, i.e. every projection element i belongs to exactly one subset $S_t$. In SPECT and PET, the data y are usually organized as a set of (parallel- or fan-beam) projections, indexed by projection angle $\phi$. Therefore, the easiest way to produce subsets of y is by assigning all of the data for each projection angle to exactly one of the subsets.
However, if the data y are stored in list-mode (see Section 13.3.2), the easiest way is to simply cut the list into blocks, assigning each block to a different subset.
The OSEM algorithm can then be written as:
initialize $\lambda_j^{\mathrm{old}}$, $j = 1, ..., J$
for $k = 1, ..., K$
  for $t = 1, ..., T$
    $\hat{y}_i = \sum_j A_{ij}\lambda_j^{\mathrm{old}} + \bar{b}_i$,  $i \in S_t$
    for $j = 1, ..., J$
      $\lambda_j^{\mathrm{new}} = \dfrac{\lambda_j^{\mathrm{old}}}{\sum_{i\in S_t} A_{ij}} \sum_{i\in S_t} A_{ij}\,\dfrac{y_i}{\hat{y}_i}$
    $\lambda^{\mathrm{old}} = \lambda^{\mathrm{new}}$   (13.71)
If all of the projections are combined into a single subset, the OSEM algorithm is
identical to the MLEM algorithm. Otherwise, a single OSEM iteration k consists
of T sub-iterations, where each sub-iteration is similar to an MLEM iteration,
except that the projection and back projection are only done for the projections of
the subset St. If every sinogram pixel i is in exactly one subset, the computational
burden of a single OSEM iteration is similar to that of an MLEM iteration.
However, MLEM would update the image only once, while OSEM updates it T
times. Experience shows that this improves convergence by a factor of about T,
which is very significant.
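The subset loop can be sketched as follows (invented names; subsets group sinogram rows round-robin, and noise-free data are used so that the fit can be verified):

```python
import numpy as np

# OSEM sketch (Eq. (13.71)): the MLEM update applied subset by subset.
rng = np.random.default_rng(3)
A = rng.random((40, 8))
lam_true = rng.random(8) + 0.5
y = A @ lam_true                              # noise-free data (b = 0)

T = 4
subsets = [np.arange(t, 40, T) for t in range(T)]

lam = np.ones(8)
for k in range(200):                          # iterations
    for S in subsets:                         # T sub-iterations per iteration
        yhat = A[S] @ lam
        lam = lam / A[S].sum(axis=0) * (A[S].T @ (y[S] / yhat))

assert np.allclose(A @ lam, y, rtol=1e-3)
```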
Convergence is only guaranteed for consistent data and provided that there
is subset balance, which requires:
$\sum_{i\in S_t} A_{ij} = \sum_{i\in S_u} A_{ij}$  for all j, t, u   (13.72)
FIG.13.14. A simulation comparing a single ordered-subsets expectation-maximization
(OSEM) iteration with 40 subsets, to 40 maximum-likelihood expectation-maximization
(MLEM) iterations. The computation time of the MLEM reconstruction is about 40 times
longer than that of OSEM. In this example, there were only two (parallel-beam) projection
angles per subset, which is clearly visible in the first OSEM iteration.
additive way. Then, a relaxation factor $\alpha$ is inserted to scale the update term to obtain RAMLA (row-action maximum-likelihood algorithm [13.27]):
$$\lambda_j^{\text{new}} = \lambda_j^{\text{old}} + \alpha \lambda_j^{\text{old}} \sum_{i \in S_t} A_{ij} \left( \frac{y_i}{\hat y_i} - 1 \right), \quad \text{with } \alpha < \frac{1}{\max_t \left( \sum_{i \in S_t} A_{ij} \right)} \tag{13.73}$$
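A single RAMLA sub-iteration of this form can be sketched as follows (an illustrative helper, not code from this chapter; the bound on the relaxation factor is noted in the comment):

```python
import numpy as np

def ramla_subiter(lam, y, A, S, alpha, eps=1e-12):
    """One RAMLA sub-iteration: an additive update scaled by the relaxation
    factor alpha, which must satisfy alpha < 1 / max_t(sum_{i in S_t} A_ij)."""
    y_hat = A[S] @ lam                                  # forward project subset bins
    update = A[S].T @ (y[S] / np.maximum(y_hat, eps) - 1.0)
    return lam + alpha * lam * update
```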
[FIG. 13.15 panels: true image; sinogram; smoothed true image; with noise; smoothed.]
modest amount of smoothing strongly suppresses the noise at the cost of a mild
loss of resolution. This is illustrated in the third row of Fig. 13.15.
If the MLEM implementation takes into account the (possibly position
dependent) spatial resolution effects, then the resolution should improve with
every MLEM iteration. After many iterations, the spatial resolution should be
rather good, similar to or even better than the sinogram resolution, but the noise
will have propagated dramatically. It is assumed that the obtained spatial
resolution corresponds to a position dependent point spread function which can
be approximated as a Gaussian with a full width at half maximum (FWHM)
of $F_{\mathrm{ML}}(x, y)$. Assume further that this image is post-smoothed with a (position
independent) Gaussian convolution kernel with an FWHM of $F_p$. The local
point spread function in the smoothed image will then have an FWHM of
$\sqrt{(F_{\mathrm{ML}}(x, y))^2 + F_p^2}$. If enough iterations are applied and if the post-smoothing
kernel is sufficiently wide, the following relation holds: $F_p \gg F_{\mathrm{ML}}(x, y)$ and,
therefore, $\sqrt{(F_{\mathrm{ML}}(x, y))^2 + F_p^2} \approx F_p$. Under these conditions, the post-smoothed
MLEM image has a nearly position independent and predictable spatial
resolution. Thus, if PET or SPECT images are acquired for quantification, it is
recommended to use many iterations and post-smoothing, rather than a reduced
number of iterations, for noise suppression.
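The quadrature combination of FWHMs used above is easy to check numerically; the millimetre values below are arbitrary illustrations:

```python
import math

def combined_fwhm(f_ml, f_p):
    """FWHM of two Gaussian blurs applied in sequence: variances, and hence
    squared FWHMs, add under convolution."""
    return math.sqrt(f_ml ** 2 + f_p ** 2)

# a reconstruction FWHM varying between 4 and 6 mm, post-smoothed with 12 mm:
local = [combined_fwhm(f, 12.0) for f in (4.0, 5.0, 6.0)]
# every value stays within about 12% of 12 mm: nearly position independent
```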
13.3.5.3. Smoothing basis functions
An alternative approach to counter noise propagation is to use an image
representation that does not accommodate noisy images. Instead of representing the
image with a grid of non-overlapping pixels, a grid of smooth, overlapping basis
functions can be used. The two most commonly used approaches are spherical
basis functions or blobs [13.28] and Gaussian basis functions or sieves [13.29].
In the first approach, the projector and back projector operators are
typically adapted to work directly with line integrals of the basis functions. In
the sieves approach, the projection of a Gaussian blob is usually modelled as the
combination of a Gaussian convolution and projection along lines. The former
approach produces a better approximation of the mathematics, while the latter
approach yields a faster implementation.
The blobs or sieves are probably most effective when their width is very
similar to the spatial resolution of the tomographic system. In this setting, the basis
function allows accurate representation of the data measured by the tomographic
system, and prevents reconstruction of much of the (high frequency) noise. It has
been shown that using the blob during reconstruction is more effective than using
the same blob only as a post-smoothing filter. The reason is that the post-filter
always reduces the spatial resolution, while a sufficiently small blob does not
smooth data if it is used during reconstruction.
If the blob or sieve is wider than the spatial resolution of the tomographic
system, then its use during reconstruction produces Gibbs over- and undershoots,
also known as ringing. This effect always arises when steep edges have to be
represented with a limited frequency range, and is related to the ringing effects
observed with very sharp low-pass filters. For some imaging tasks, these ringing
artefacts are a disadvantage.
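The ringing effect can be reproduced with a toy least-squares fit of a step edge onto a grid of deliberately wide Gaussian basis functions (all sizes below are arbitrary choices for the sketch):

```python
import numpy as np

x = np.arange(64, dtype=float)
step = (x >= 32).astype(float)                    # a steep edge

centers = np.arange(0.0, 64.0, 2.0)               # basis function grid
sigma = 4.0                                       # deliberately wider than the sampling
B = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)

coef, *_ = np.linalg.lstsq(B, step, rcond=None)   # best fit within the smooth basis
fit = B @ coef
# the fit overshoots above 1 and undershoots below 0 near the edge: Gibbs ringing
```

The overshoot appears because the smooth basis spans only a limited frequency range, exactly as described in the text.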
13.3.5.4. Maximum a posteriori or penalized likelihood
Smoothing the MLEM image is not a very elegant approach: first, the
likelihood is maximized, and then it is decreased again by smoothing the image.
It seems more elegant to modify the objective function, such that the image that
maximizes it does not need further processing. This can be done with a Bayesian
approach, which is equivalent to combining the likelihood with a penalty
function.
It is assumed that a good reconstruction image will be obtained if that
image maximizes the (logarithm of the) probability $p(\lambda \mid y)$ given by Eq. (13.39)
and repeated here for convenience:

$$\hat\lambda = \arg\max_{\lambda} \left( \ln p(y \mid \lambda) + \ln p(\lambda) \right) \tag{13.74}$$
The second term represents the a priori knowledge about the tracer distribution,
and it can be used to express our belief that the true tracer distribution is fairly
smooth. This is usually done with a Markov prior. In a Markov prior, the a priori
probability for a particular voxel, given the value of all other voxels, is only a
function of the direct neighbours of that voxel:
$$p(\lambda_j \mid \lambda_k, k \neq j) = p(\lambda_j \mid \lambda_k, k \in N_j) \tag{13.75}$$

where $N_j$ is the set of direct neighbours of voxel $j$. The log-prior can then be
written in terms of an energy function $E$ of the differences between neighbouring
voxel values:

$$\ln p(\lambda_j \mid \lambda_k, k \in N_j) = -\sum_{k \in N_j} E(\lambda_j - \lambda_k) \tag{13.76}$$
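Typical textbook choices for the energy function E are sketched below; exact parameterizations vary between implementations, and the function names and the threshold delta are illustrative:

```python
import numpy as np

def quadratic(d):
    """Quadratic prior: smooth everywhere, penalizes large differences heavily."""
    return 0.5 * d ** 2

def huber(d, delta=1.0):
    """Huber prior: quadratic near zero, linear beyond delta; still yields a
    concave log-prior, but is more edge preserving than the quadratic."""
    a = np.abs(d)
    return np.where(a <= delta, 0.5 * d ** 2, delta * (a - 0.5 * delta))

def geman(d, delta=1.0):
    """Geman (Geman-McClure) type prior: bounded energy, hence a non-concave
    log-prior with possible local maxima in the posterior."""
    return d ** 2 / (delta ** 2 + d ** 2)
```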
FIG. 13.16. The energy function of the quadratic prior, the Huber prior and the Geman prior.

[Figure panels: original; MLEM; quadratic prior; Huber prior; Geman prior.]
functions yield a concave prior: it has a single maximum. In contrast, the Geman
prior is not concave (see Fig. 13.16) and has local maxima. Such non-concave
priors require careful initialization, because the final reconstruction depends on
the initial image and on the behaviour of the optimization algorithm.
Figure 13.18 shows that MAP reconstructions produce position dependent
spatial resolution, similar to MLEM with a reduced number of iterations. The
reason is that the prior is applied with a uniform weight, whereas the likelihood
provides more information about some voxels than about others. As a result,
the prior produces more smoothing in regions where the likelihood is weaker,
e.g. regions that have contributed only a few photons to the measurement due to
high attenuation.
[Figure panels: MLEM; smoothed MLEM.]
The prior can be made position dependent as well, to ensure that the balance
between the likelihood and the prior is about the same in the entire image. In
that case, MAP with a quadratic prior produces images which are very similar
to MLEM images with post-smoothing: if the prior and smoothing are tuned to
produce the same spatial resolution, then both algorithms also produce nearly
identical noise characteristics.
Many papers have been devoted to the development of algorithms for MAP
reconstruction. A popular algorithm is the so-called one step late algorithm.
Inserting the derivative of the prior P in Eq. (13.66) yields:

$$\frac{\partial (L(\lambda) + P(\lambda))}{\partial \lambda_j} = \sum_i A_{ij} \frac{y_i}{\hat y_i^{(k)}} - \sum_i A_{ij} + \frac{\partial P(\lambda)}{\partial \lambda_j} = 0 \tag{13.77}$$

with $\hat y_i^{(k)} = \sum_j A_{ij} \lambda_j^{(k)}$. The one step late algorithm evaluates the derivative of
the prior at the current reconstruction $\lambda^{(k)}$, which leads to the update:

$$\lambda_j^{(k+1)} = \frac{\lambda_j^{(k)}}{\sum_i A_{ij} - \left.\dfrac{\partial P(\lambda)}{\partial \lambda_j}\right|_{\lambda^{(k)}}} \sum_i A_{ij} \frac{y_i}{\hat y_i^{(k)}} \tag{13.78}$$

An alternative is to write the MLEM update as a gradient ascent:

$$\lambda_j^{(k+1)} = \frac{\lambda_j^{(k)}}{\sum_i A_{ij}} \sum_i A_{ij} \frac{y_i}{\hat y_i^{(k)}} \tag{13.79}$$

$$= \lambda_j^{(k)} + \frac{\lambda_j^{(k)}}{\sum_i A_{ij}} \left.\frac{\partial L_{\mathrm{ML}}(\lambda)}{\partial \lambda_j}\right|_{\lambda^{(k)}} \tag{13.80}$$

and to insert the gradient of the penalized likelihood:

$$\lambda_j^{(k+1)} = \lambda_j^{(k)} + \frac{\lambda_j^{(k)}}{\sum_i A_{ij}} \left.\frac{\partial (L_{\mathrm{ML}}(\lambda) + P(\lambda))}{\partial \lambda_j}\right|_{\lambda^{(k)}} \tag{13.81}$$
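A minimal sketch of the one step late update, writing the log-prior as P = -beta*U for a penalty energy U (the function names and the 1-D quadratic penalty below are illustrative):

```python
import numpy as np

def osl_update(lam, y, A, beta, penalty_grad, eps=1e-12):
    """One-step-late MAP-EM sketch: the penalty gradient is evaluated at the
    CURRENT estimate lam (hence 'one step late') and placed in the denominator."""
    y_hat = A @ lam
    num = A.T @ (y / np.maximum(y_hat, eps))
    den = A.sum(axis=0) + beta * penalty_grad(lam)
    return lam * num / np.maximum(den, eps)

def quad_penalty_grad(lam):
    """Gradient of a quadratic penalty on a 1-D voxel chain:
    dU/dlam_j = sum over neighbours k of (lam_j - lam_k)."""
    g = np.zeros_like(lam)
    g[:-1] += lam[:-1] - lam[1:]
    g[1:] += lam[1:] - lam[:-1]
    return g
```

Note that for a large beta the denominator can become negative, a known weakness of the one step late approach; the clipping above only sidesteps it crudely.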
ill-posed (the contamination data are typically quite noisy) and computationally
exceedingly expensive, and, thus, not feasible for routine clinical use.
The more practical, and commonly used, approach is to include correction
effects as multiplicative factors and additive terms within the forward projection
model of the iterative reconstruction approaches:
$$\hat y = A\lambda + b \tag{13.82}$$
where the effects directly influencing the direct (true) data are included inside the
system matrix A and will be discussed in the following, while the additive terms
b (including scatter and randoms) will be discussed separately in Section 13.3.6.2
on additive terms.
13.3.6.1. Factors affecting direct events: multiplicative effects
In the PET case, the sequence of the physical effects (described in previous
chapters) that occur as the true coincident events are generated and detected can
be described by the following factorization of the system matrix A as discussed
in detail in Ref. [13.30]:
A = Adet.sens Adet.blur Aatt Ageom Atof Apositron (13.83)
where
Apositron models the positron range;
Atof models the timing accuracy for the TOF PET systems (TOF resolution
effects, as discussed in Section 13.3.3.3);
Ageom
is the geometric projection matrix, the core of the system matrix,
which is a geometrical mapping between the source (voxel j) and data
(projection bin i, defined by the LOR, or its time bin in the TOF case);
the geometrical mapping is based on the probability (in the absence of
attenuation) that photon pairs emitted from an individual image location
(voxel) reach the front faces of a given crystal pair (LOR);
Aatt is a diagonal matrix containing attenuation factors on individual LORs;
Adet.blur models the accuracy of reporting the true LOR positions (detector
resolution effects, discussed in Section 13.3.6.3);
and Adet.sens is a diagonal matrix modelling the probability that an event will
be reported once the photon pair reaches the detector surface: a unique
multiplicative factor for each detector crystal pair (LOR), modelled by
normalization coefficients, but can also include the detector axial extent and
detector gaps.
In practice, the attenuation operation Aatt is usually moved to the left (to
be performed after the blurring operation). This is strictly correct only if the
attenuation factors change slowly, i.e. they do not change within the range
of detector resolution kernels. However, even if this is not the case, a good
approximation can be obtained by using blurred (with the detector resolution
kernels) attenuation coefficients. In this case, the multiplicative factors Adet.sens
and Aatt can be removed from the system matrix A and applied only after the
forward projection operation as a simple multiplication operation (for each
projection bin). The rest of the system matrix (except Apositron, which is object
dependent) can now be pre-computed, whether in a combined or a factorized
form, since it is now independent of the reconstructed object. On the other hand,
the attenuation factors Aatt (and Apositron, if considered) have to be calculated for
each given object.
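The practical consequence, under the approximations above, is that only cheap diagonal scalings and the positron blur remain object specific, while the expensive geometric part can be pre-computed. A schematic sketch with invented operator names:

```python
import numpy as np

def forward_project(lam, positron_blur, G, det_blur, att, sens):
    """Factored PET forward model sketch, with attenuation moved to the left
    of the detector blur as described in the text."""
    z = positron_blur(lam)        # object dependent, image space
    p = G @ z                     # pre-computed geometric projection
    p = det_blur(p)               # sinogram space detector resolution model
    return sens * (att * p)       # diagonal factors, one number per projection bin
```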
In the SPECT case, the physical effects affecting the true events can be
categorized and factorized into the following sequence:
A = Adet.sens Adet.blur Ageom,att (13.84)
where
Adet.sens includes multiplicative factors (such as detector efficiency and decay
time);
Adet.blur represents the resolution effects within the gamma camera (the intrinsic
resolution of the system);
and Ageom,att is the geometric projection matrix, also including the collimator
effects (such as the depth dependent resolution) and the depth and view dependent
attenuation factors.
For gamma cameras, the energy and linearity corrections are usually
performed in real time, and the remaining (detector efficiency) normalization
factors are usually very close to one and can be, for all practical purposes, ignored
or pre-corrected. Similarly, the theory says that the decay correction should be
performed during the reconstruction, because it is different for each projection
angle. However, for most tracers, the decay during the scan is very modest,
and in practice it is usually either ignored or done as a pre-correction. The
attenuation component is object dependent and needs to be recalculated for each
reconstructed object. Furthermore, its calculation is much more computationally
expensive than in the PET case, since it involves separate calculations of the
attenuation factors for each voxel and for each view. This is one of the reasons
why the attenuation factors have often been ignored in SPECT. More details on
the inclusion of the resolution effects into the system matrix are discussed in
Section 13.3.6.3.
13.3.6.2. Additive contributions
The main additive contaminations are scatter (SPECT and PET) and
random events (PET). The simplest possibility of dealing with them is to
subtract their estimates ($\bar s$ and $\bar r$) from the acquired data. While this is a valid
(and necessary) pre-correction step for the analytical reconstructions, it is not
recommended for statistical approaches since it changes the statistical properties
of the data, causing them to lose their Poisson character. As the maximum-likelihood algorithm is designed for Poisson distributed data, its performance is
suboptimal if the data noise is different from Poisson. Furthermore, subtraction of
the estimated additive terms from the noisy acquired data can introduce negative
values into the pre-corrected data, especially for low count studies. The negative
values have to be truncated before the maximum-likelihood reconstruction, since
it is not able to correctly handle the negative data. This truncation, however, leads
to a bias in the reconstruction.
At the other end of the spectrum of possibilities would be to consider
the scatter and randoms directly in the (full) system model, that is, including a
complete physical model of the scatter and random components into a Monte
Carlo calculation of the forward projection. However, this approach is exceedingly
computationally expensive and is not feasible for practical use. A practical and
the most common approach for dealing with the additive contaminations is to add
their estimate ($b = \bar s + \bar r$) to the forward projection in the matrix model of the
iterative reconstruction, i.e. the forward model is given by $A\lambda + b$, as considered
in the derivation of the MLEM reconstruction (Eq.(13.67)).
Special treatment has to be considered for clinical scanners in which the
random events ($r$, estimated by delayed coincidences) are on-line subtracted
from the acquired data ($y$, events in the coincidence window, the prompts).
The most important characteristic of Poisson data is that their mean equals
their variance: $\mathrm{mean}(y_i) = \mathrm{var}(y_i)$. However, after the subtraction of the delays
from the prompts (both being Poisson variables), the resulting data ($\tilde y$) are
not Poisson anymore, since $\mathrm{mean}(\tilde y_i) = \mathrm{mean}(y_i - r_i) = \mathrm{mean}(y_i) - \mathrm{mean}(r_i)$,
while $\mathrm{var}(\tilde y_i) = \mathrm{var}(y_i - r_i) = \mathrm{var}(y_i) + \mathrm{var}(r_i)$. To regain the main characteristic
of the Poisson data (at least for the first two moments), the shifted Poisson
approach can be used, utilizing the fact that adding a (noiseless) constant
value to a Poisson variable changes the mean but preserves the variance:
$$\lambda_j^{(k+1)} = \frac{\lambda_j^{(k)}}{\sum_i A_{ij}} \sum_i A_{ij} \frac{\tilde y_i + 2\bar r_i}{\sum_{j'} A_{ij'} \lambda_{j'}^{(k)} + \bar s_i + 2\bar r_i} \tag{13.85}$$
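The moment matching behind the shifted Poisson model can be verified with simulated prompts and delays (the rates below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
trues, randoms = 10.0, 4.0                 # assumed mean counts per bin

y = rng.poisson(trues + randoms, n)        # prompts
r = rng.poisson(randoms, n)                # delayed coincidences
d = y - r                                  # on-line subtracted data

# mean(d) = trues, but var(d) = trues + 2*randoms: no longer Poisson-like
shifted = d + 2 * randoms                  # add back twice the randoms mean
# now mean(shifted) and var(shifted) both equal trues + 2*randoms
```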
It is worthwhile mentioning here that even in the shifted Poisson case, the
negative values in the subtracted data and consequent truncation leading to the
bias and artefacts cannot be completely avoided. However, the chance of the
negative values decreases since the truncation of the negative values is being
performed on the value-shifted data ( i + 2ri ) . Examples of reconstructions
from data with a subtracted additive term, using the regular MLEM algorithm
and using MLEM with the shifted Poisson model, are shown in Fig.13.19. As the
counts were relatively high in this simulation, the subtraction did not produce
[FIG. 13.19 panels: original; original + contaminator; MLEM of $(y - r)$; MLEM of $(y - r + 2\bar r, 2\bar r)$.]
for the given scanner). The positron range depends on the particular attenuation
structures in which the positrons annihilate, and also varies from isotope
to isotope. Furthermore, the shape of the probability function (kernel) of the
positron annihilation changes abruptly at the boundaries of two tissues, such as
at the boundary of the lungs and surrounding soft tissues, and, thus, it strongly
depends on the particular object's morphology and is quite challenging to model
accurately. In general, the positron range has a small effect (compared to the other
effects) for clinical scanners, particularly for studies using 18F-labelled tracers,
and can often be ignored. However, for small animal imaging and for other tracers
(such as 82Rb), the positron range becomes an important effect to be considered.
[Figure panels: simulated SPECT data, without and with Poisson noise; MLEM reconstructions.]
at one or more representative locations within the given scanner. This approach
typically provides satisfactory results within the central FOV of large, whole
body PET scanners. However, for PET systems with smaller ring diameters
(relative to the reconstruction FOV), such as animal systems, and for SPECT
systems with depth dependent resolution (and in particular with non-circular
orbits), it is desirable to use more accurate spatially variant resolution models.
The second category is using analytically calculated resolution functions
(usually spatially variant anisotropic kernels) for each location (LOR) as
determined based on analytical models of physical effects affecting the resolution.
This approach is usually limited to simple analytical models representing (or
approximating) only basic physical characteristics of the system. The resolution
kernels are usually calculated in real time during the reconstruction process when
they are needed within the forward and back projection calculations. In SPECT,
distance dependent collimator blurring requires convolution kernels that become
wider and, therefore, need more computation, with increasing distance to the
collimator. The computation time can be reduced considerably by integrating
an incremental blurring step into the projector (and back projector), based on
Gaussian diffusion. This method, developed by McCarthy and Miller in 1991, is
described in more detail in chapter 22 of Ref. [13.5].
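The Gaussian diffusion idea relies on variances adding under convolution: stepping plane by plane away from the collimator, a small incremental blur at each step gives every plane exactly its depth dependent total blur. A 1-D sketch with an invented sigma table:

```python
import numpy as np

def gaussian_kernel(sigma, radius=12):
    """Normalized sampled Gaussian kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / max(sigma, 1e-9)) ** 2)
    return k / k.sum()

def project_with_diffusion(planes, sigmas):
    """planes[k]: activity profile at depth k (larger k = farther from the
    collimator); sigmas[k]: total blur required at depth k (non-decreasing).
    Accumulate planes from far to near, blurring only by the incremental
    sigma at each step, so plane k ends up blurred by exactly sigmas[k]."""
    acc = np.zeros_like(planes[-1])
    for k in range(len(planes) - 1, 0, -1):
        acc = acc + planes[k]
        inc = np.sqrt(sigmas[k] ** 2 - sigmas[k - 1] ** 2)
        acc = np.convolve(acc, gaussian_kernel(inc), mode="same")
    acc = acc + planes[0]
    return np.convolve(acc, gaussian_kernel(sigmas[0]), mode="same")
```

Because Gaussian variances add, each small incremental kernel is narrow, which is what makes this cheaper than convolving every plane with its full kernel.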
A more accurate but computationally very demanding approach is using
Monte Carlo simulations of the resolution functions based on a set of point sources
at various (ideally all) image locations. Setting up an accurate mathematical
model (transport equations tracing the photon paths through the detector system/
crystals) is relatively easy within the Monte Carlo simulations, compared to the
analytical approach of determining the resolution function. However, to obtain
sufficient statistics to get the desired accuracy of the shape of the resolution
functions is extremely time consuming. Consequently, simplifications often have
to be made in practice, such as determining the resolution kernels only at a set of
representative locations and interpolating/extrapolating from them the resolution
kernels at other locations.
The most accurate but also most involved approach is based on
experimental measurements of the system response by measuring physical point
sources at a set of image locations within the scanner. This is a tedious and very
time consuming process, involving point sources with long half-life isotopes and
usually requiring the use of accurate robotic stages to move the point source.
Among the biggest challenges is to accumulate a sufficient number of counts to
obtain an accurate point spread function, even at a limited number of locations.
Consequently, the actual resolution kernels used in the reconstruction model are
often estimated by fitting analytical functions (kernels) to the measured data,
rather than directly using the measured point spread functions.
hand, the transmission scan (CT) is relatively short and can usually be done in
a breath-hold mode. Consequently, the attenuation image is usually motion-free
and captures only one particular patient position and organ configuration
(time frame). If the attenuation factors obtained from this fixed-time position
attenuation image are applied to the emission data acquired at different time
frames (or averaged over many time frames), this leads to artefacts in the
reconstructed images, which tend to be far more severe in PET than in SPECT.
This is, for example, most pronounced at the bottom of the lungs,
which can typically move several centimetres during the breathing cycle, causing
motion between two regions with very different attenuation coefficients.
Emission data motion: Correction approaches for motion during the
emission scan are discussed first. The first step is subdividing the data (in PET,
typically list-mode data) into a sufficient number of time frames to ensure that the
motion within each frame is small. For the organ movement, the frames can be
distributed over a period of the organ motion (e.g. breathing cycle). For the patient
motion, the frames would be typically longer and distributed throughout the
scan time. Knowledge about the motion can be obtained using external devices,
such as cameras with fiducial markers, expansion belts or breathing sensors for
respiratory motion, the electrocardiogram signal for cardiac motion, etc. There
are also a limited number of approaches for estimating the motion directly from
the data.
Once the data are subdivided into the set of the frames, the most
straightforward approach is to reconstruct data independently in each frame.
The problem with this approach is that the resulting images have a poor signal
to noise ratio because the acquired counts have been distributed into a number
of individual (now low count) frames. To improve the signal to noise ratio, the
reconstructed images for individual frames can be combined (averaged) after
they are registered (and properly deformed) to the reference time frame image.
However, for statistical non-linear iterative reconstruction algorithms, this is not
equivalent to (and typically of a lower quality than) the more elaborate motion
correction approaches, taking into account all of the acquired counts in a single
reconstruction, as discussed below.
For rigid motion (e.g. in brain imaging), the events on LORs (LOR$_i$) from
each time frame, or time position, can be corrected for motion by translation
(using affine transformations) into the new LORs (LOR$'_i$) in the reference frame
(see Fig. 13.21 (top right, solid line)), in which the events would be detected
if there were no motion. Reconstruction is then done in a single reference
frame using all acquired counts, leading to a better signal to noise ratio in the
reconstructed images. Care has to be taken with the detector normalization factors
so that the events are normalized using the proper factors (Ni) for the LORs on
which they were actually detected (and not into which they were translated).
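For the rigid case, the LOR transformation amounts to applying the inverse rigid motion to both detection endpoints; the bookkeeping just described (normalization from the detected LOR, attenuation along the transformed one) can be sketched as follows (all names hypothetical):

```python
import numpy as np

def lor_to_reference(p1, p2, R, t):
    """Map a detected LOR, given by its endpoints p1 and p2, into the
    reference frame; (R, t) is the rigid motion of the current time frame
    relative to the reference frame, so its inverse is applied here."""
    R_inv = R.T                      # inverse of a rotation matrix
    return R_inv @ (p1 - t), R_inv @ (p2 - t)

# normalization factor: taken for the DETECTED crystal pair (p1, p2);
# attenuation factor: computed along the TRANSFORMED line in the reference frame
```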
Attenuation factors are obtained on the transformed lines (att$'_i$) through the
attenuation image in the reference frame. Care also has to be given to the proper
treatment of data LORs with events being translated into, or out of, the detector
gaps or detector ends. This is important, in particular for the calculation of the
sensitivity matrix, which then becomes a very time consuming process.
[FIG. 13.21 panels: scanned object in frame 0 and frame k; image estimates; uncorrected image.]

FIG. 13.21. Illustration of motion corrections for events acquired within line of response
LOR$_i$ with corresponding normalization N$_i$ and attenuation att$_i$ factors. Left top: positions
and shapes of the object in the reference time frame 0 and frame k. Left bottom: illustration
of blurring in the reconstruction combining events from all frames without motion
correction (attenuation factors are also averaged over the whole range of frames, att$_{i0-k}$).
Middle column: processing within the reference time frame. Right top: LOR based motion
correction; the LOR$_i$ for frame k (dashed line) has to be transformed to the LOR$'_i$ (solid
line for rigid motion, dotted line for non-rigid motion), which represents the paths that the
photons would travel through the reference object if there were no motion. It should be
noted that although the LORs are transformed, the normalization factors are used for the
crystal pairs (LORs) in which the events were detected (N$_i$), while the used attenuation
factors are for the transformed paths (att$'_i$). Right bottom: image based motion correction,
including image morphing of the estimated image from the reference frame (dashed lines)
into the given frame (solid line).

For non-rigid (elastic) motion, which is the case for most of the practical
applications, the motion correction procedures become quite involved. There
are two basic possibilities. The first approach is to derive the transformations
of individual paths of events (LORs) from each frame into the reference frame
(see Fig. 13.21 (top right, dotted line)). For the non-rigid motion, the transformed
paths through the reference object frame are not straight lines anymore, thus
leading to very large computational demands for the calculations of the forward
and back projection operations. The same care for normalization, gaps and
detector ends has to be taken as above.
The second, more efficient, approach involves morphing the image
estimate (of the reference image) into the frame for which current events (LORs)
are being processed (see Fig. 13.21 (bottom right, solid line)). It should be noted
that some pre-sorting of the data is considered, so that events from each frame
are processed together (using a common image morphing operation). Here, the
acquired LORs (LORi) and their normalization coefficients (Ni) are directly used
without modification. However, the sensitivity matrix still needs to be carefully
calculated, taking into consideration update and subset strategy, e.g. including
the morphing operation if subset data involve several frames. This is, however,
a simpler operation than in the LOR based case since the morphing is done in
the image domain. This image based approach is not only more efficient, but
also better reflects/models the actual data acquisition process during which the
acquired object is being changed (morphed).
Attenuation effects: In the following, it is considered that either attenuation
information for each time frame is available, for example, having a sequence of
CT scans for different time positions, or there is knowledge of the motion and
tools to morph a fixed-time position CT image to represent attenuation images at
individual time frames. It is further considered that tools are available to obtain
the motion transformation of data and/or images between the individual time
frames. If the emission data are stored or binned without any motion gating, they
represent motion-blurred emission information over the duration of the scan.
Using attenuation information for them for a fixed time position is not correct. It
would be better to pre-correct those data using proper attenuation factors for each
frame, but then the statistical properties (Poisson character) are lost due to the
pre-correction. A good compromise (although not theoretically exact) is to use
motion-blurred attenuation factors during the pre-correction or the reconstruction
process.
For data stored in multiple time frames, separate attenuation factors (or
their estimates) are used for each frame, such that they reflect attenuation factors
(for each LOR) at that particular time frame. For the case when there are multiple
CT images, this is simply obtained by calculation (forward projection) of the
attenuation coefficients for each frame from the representative CT image for that
frame. For the case when there is only one CT image, attenuation factors have
to be calculated on the modified LORs (for each time frame) in the LOR based
corrections, or to morph the attenuation image for each frame and then calculate
the attenuation factors from the morphed images in the image based corrections.
$$\lambda(x, y) = \int_0^{\pi} \mathrm{d}\phi \int_{-\infty}^{+\infty} h(x\cos\phi + y\sin\phi - s)\, Y(s, \phi)\, \mathrm{d}s \tag{13.86}$$

where $h(s)$ is the convolution kernel, combining the inverse Fourier transform of
the ramp filter and a possible low-pass filter to suppress the noise.

The variance of the measured sinogram data $Y(s, \phi)$ equals its expectation
$\bar Y(s, \phi)$; the covariance between two different sinogram values $Y(s, \phi)$ and
$Y(s', \phi')$ is zero. Consequently, the covariance between two reconstructed pixel
values $\lambda(x, y)$ and $\lambda(x', y')$ equals:

$$\mathrm{covar}(\lambda(x, y), \lambda(x', y')) = \int_0^{\pi} \mathrm{d}\phi \int_{-\infty}^{+\infty} h(x\cos\phi + y\sin\phi - s)\, h(x'\cos\phi + y'\sin\phi - s)\, \bar Y(s, \phi)\, \mathrm{d}s \tag{13.87}$$
This integral is non-zero for almost all pairs of pixels. As h(s) is a high-pass
filter, neighbouring reconstruction pixels tend to have fairly strong negative
correlations. The correlation decreases with increasing distance between $(x, y)$
and $(x', y')$. The variance is obtained by setting $x = x'$ and $y = y'$, which produces:

$$\mathrm{var}(\lambda(x, y)) = \int_0^{\pi} \mathrm{d}\phi \int_{-\infty}^{+\infty} h^2(x\cos\phi + y\sin\phi - s)\, \bar Y(s, \phi)\, \mathrm{d}s \tag{13.88}$$
[FIG. 13.22 panels: true image; with noise; variance (400 noise realizations).]
This matrix gives the covariances between all possible pixel pairs in the
image produced by WLS reconstruction. The projection $A$ and back projection
$A^T$ have a low-pass characteristic. Consequently, the inverse $(A^T C_y^{-1} A)^{-1}$ acts as a
high-pass filter. It follows that neighbouring pixels of WLS reconstructions tend
to have strong negative correlations, as is the case with FBP. Owing to this, the
MLEM variance decreases rapidly with smoothing.
Figure 13.22 shows mean and noisy reconstructions and variance images of
MLEM with Gaussian post-smoothing and MAP with a quadratic prior. For these
reconstructions, 16 iterations with 8 subsets were applied. MAP with a quadratic
prior produces fairly uniform variance, but with a position dependent resolution.
In contrast, post-smoothed MLEM produces fairly uniform spatial resolution, in
combination with a non-uniform variance.
CHAPTER 14
NUCLEAR MEDICINE IMAGE DISPLAY
H. BERGMANN
Center for Medical Physics and Biomedical Engineering,
Medical University of Vienna,
Vienna, Austria
14.1. INTRODUCTION
The final step in a medical imaging procedure is to display the image(s)
on a suitable display system where they are presented to the medical specialist
for diagnostic interpretation. The display of hard copy images on X ray film
or photographic film has largely been replaced today by soft copy image
display systems with cathode ray tube (CRT) or liquid crystal display (LCD)
monitors as the image rendering device. Soft copy display requires a high
quality display monitor and a certain amount of image processing to optimize
the image both with respect to the properties of the display device and to some
psychophysiological properties of the human visual system. A soft copy display
system, therefore, consists of a display workstation providing some basic image
processing functions and the display monitor as the intrinsic display device.
Display devices of lower quality may be used during intermediate steps of the
acquisition and analysis of a patient study. Display monitors with a quality
suitable for diagnostic reading by the specialist medical doctor are called primary
devices, also known as diagnostic devices. Monitors with lower quality but good
enough to be used for positioning, processing of studies, presentation of images
in the wards, etc. are referred to as secondary devices or clinical devices.
Nuclear medicine images can be adequately displayed even for diagnostic
purposes on secondary devices. However, the increasing use of X ray images on
which to report jointly with images from nuclear medicine studies, such as those
generated by dual modality imaging, notably by positron emission tomography
(PET)/computed tomography (CT) and single photon emission computed
tomography (SPECT)/CT, requires display devices capable of visualizing
high resolution grey scale images at diagnostic quality, i.e. primary display
devices. Both grey scale and colour display devices are used, the latter playing
an important role in the display of processed nuclear medicine images and in
the display of overlaid images such as from registered dual modality imaging
studies.
imaging device and its noise characteristics. As a rule of thumb, the sampling
size used for a nuclear medicine image preserves the information content in the
image when it is approximately one third of the full width at half maximum
(FWHM) of the spatial resolution of the acquisition system. Thus, a scintillation
camera equipped with, for example, a high resolution collimator, a system spatial
resolution of 8 mm FWHM and a field of view of 540 mm × 400 mm would
require a matrix size of at most 256 × 256 pixels to preserve the information
content transmitted by the camera. A current commercial off the shelf
(COTS) display device, on the other hand, has minimum pixel dimensions of
1024 × 1280. Displaying the original image matrix at native pixel resolution
would result in an image too small for visual interpretation. It is, therefore,
essential for the image to be magnified to occupy a reasonable sized part of the
available screen area. Straightforward magnification using a simple interpolation
such as the nearest neighbour interpolation scheme would result in a block
structure with clearly visible square elements which strongly interfere with the
intensity changes created by the true structure of the object generating artefacts
in the interpretation. Magnification is necessary and can be done without artefact
generation by a suitable interpolation algorithm that generates smooth transitions
between screen pixels while preserving the intensity variations within the original
image. It is the task of the display workstation to provide software for visualizing
this type of image.
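The sampling rule of thumb above reduces to simple arithmetic. The following sketch reproduces the 256 × 256 figure for the example camera; the function name and the power-of-two rounding convention are my own, not taken from the text:

```python
import math

def required_matrix_size(fov_mm, fwhm_mm, factor=3.0):
    """Smallest power-of-two matrix dimension whose pixel size does not
    exceed FWHM/factor (the "one third of the FWHM" rule of thumb)."""
    pixel_mm = fwhm_mm / factor      # target sampling distance
    n_min = fov_mm / pixel_mm        # minimum number of pixels needed
    return 2 ** math.ceil(math.log2(n_min))

# Example from the text: 8 mm FWHM, 540 mm x 400 mm field of view
print(required_matrix_size(540, 8))  # 256
print(required_matrix_size(400, 8))  # 256
```

With a sampling distance of about 2.67 mm, roughly 202 pixels cover the 540 mm axis, so the next conventional matrix size is 256.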
14.2.2. Contrast resolution
This refers to the number of intensity levels which an observer can perceive
for a given display. It is referred to as perceived dynamic range (PDR).
Brightness refers to the emitted luminance on screen and is measured in
candelas per square metre (cd/m2). The maximum brightness of a monitor is
an important quality parameter. Specifications of medical display devices also
include the calibrated maximum brightness which is lower but is recommended
for primary devices to ensure that the maximum luminance can be kept constant
during the lifespan of the display device. Typical values for a primary device LCD
monitor are 700 and 500 cd/m2 for the maximum and the calibrated maximum
luminance, respectively.
The dynamic range of a display monitor is defined as the ratio between the
highest and the lowest luminance (brightness) a monitor is capable of displaying.
The dynamic range is highest if measured in the absence of ambient light. It is
then called contrast ratio (CR=LH/LL) and is the figure usually quoted by vendors
in the specifications. A typical CR of a grey scale primary LCD monitor is 700:1,
measured in a dark reading room. If luminance values are measured with ambient
light present, which is the scenario in practice, CR is replaced by the luminance
ratio (LR=LH/LL), which is the ratio of the highest and the lowest luminance
values including the effect of ambient light. It can be considerably smaller than
the CR, since the effect of ambient lighting is added as a luminance Lamb to both
the minimum and the maximum luminances. The CR is related to the PDR, but
its potential usefulness as a predictor of monitor performance suffers from the
lack of standardized measurement procedures and the effect of ambient light. The
dark room CR is a common performance parameter quoted by manufacturers.
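The effect of ambient light on the quoted contrast ratio is easy to quantify. In this sketch the luminance values (350 and 0.5 cd/m², with 1 cd/m² of reflected ambient light) are illustrative, not taken from the text:

```python
def contrast_ratio(l_high, l_low):
    """Dark room contrast ratio CR = L_H / L_L."""
    return l_high / l_low

def luminance_ratio(l_high, l_low, l_amb):
    """Luminance ratio LR: ambient luminance L_amb adds to both the
    minimum and the maximum luminance."""
    return (l_high + l_amb) / (l_low + l_amb)

print(contrast_ratio(350, 0.5))        # 700.0
print(luminance_ratio(350, 0.5, 1.0))  # 234.0
```

Even a modest 1 cd/m² of reflected room light cuts the effective dynamic range to roughly a third of the dark room figure.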
The PDR is the number of intensity levels an observer can actually
distinguish on a display. It can be estimated based on the concept of just
noticeable differences (JNDs). The JND is the luminance difference of a given
target under given viewing conditions that the average human observer can just
perceive. The measured JND depends strongly on the conditions under which
the experiment is performed, for example, on the size, shape and position of the
target. The PDR is defined as the number of JNDs across the dynamic range of
a display. The PDR for grey scale displays has been assessed in Ref.[14.4] to
be around a hundred. The number of intensity values which a pixel of the digital
image can hold is much higher. The pixel of an image matrix is usually 1 or
2 bytes deep. It is an integer number between 256 and 65536, and is given by
the pixel depth of the image matrix. It is a further task of the display system to
scale the original intensity values to a range compatible with the performance
of the human observer. It is common to use 256 intensity values to control the
brightness of a display device as this is sufficient to produce a sequence of
brightness levels perceived as continuous by the human observer.
Colour displays using pseudo-colour scales can extend the PDR to
about 1.5 times that of a grey scale display. This has been demonstrated for a
heated-object scale which has the additional advantage of producing a natural
image [14.4]. Owing to the enormous number of possible colour scales and the
fact that the majority of them produce unnatural images, the concept of JNDs,
while valid in principle, cannot be transferred directly to colour displays.
14.3. DISPLAY DEVICE HARDWARE
14.3.1. Display controller
The display controller, also known as a video card or graphics card, is the
interface between the computer and the display device. Its main components are
the graphical processing unit (GPU), the video BIOS, the video memory and the
random access memory digital to analogue converter (RAMDAC). The GPU
is a fast, specialized processor optimized for graphics and image processing
operations. The video memory holds the data to be displayed. The capacity of
Colour CRTs use three different phosphors which emit red, green and blue
light, respectively. The phosphors are packed together in clusters called triads
or in stripes. Colour CRTs have three electron guns, one for each primary colour.
Each gun's beam reaches the dots of exactly one type of phosphor. A mask close
to the screen absorbs electrons that would otherwise hit the wrong phosphor. The
triads or stripes are so small that the intensities of the primary colours merge in
the eye to produce the desired colour.
Fig.14.3. A diagram of the pixel layout. Each liquid crystal pixel is connected to a transistor
which provides the voltage that controls the brightness. The pixel is addressed by a row-column
scheme.
the fact that the nuclear medicine image is a low count image with considerable
statistical fluctuations, making the comparison of tiny intensity differences
meaningless. A main difference from diagnostic X ray reporting is the fact that
colour in the image was recognized early as a helpful technique to improve
diagnostic reading and a tradition of visualization in colour has been established.
Therefore, images and analysis of results, in particular curves and functional or
metabolic images, are preferably displayed using colours. The display is usually
done on workstations with special nuclear medicine software and using current
COTS colour LCD screens as standard display devices. Typical screen sizes
are from 20 to 24 in and native display resolutions from 1024 × 1280 pixels
to 1200 × 1600 pixels. Depending on the capabilities of the nuclear medicine
display workstation's software, several monitors can be used simultaneously.
The need to be able to perform concurrent diagnostic reporting on X ray
images, generated by dual mode acquisition techniques such as PET/CT and
SPECT/CT, and the inclusion of images from other modalities via PACS in the
reporting session, requires the use of grey scale display devices of diagnostic
quality (primary devices) at the nuclear medicine display workstation.
Both CRT and LCD display devices are available with spatial resolution
and CRs satisfying the requirements for a primary device. LCD displays are
rapidly replacing CRT displays for several reasons:

- LCDs typically have about twice the brightness of CRTs. An overall brighter image is less sensitive to changes in the level of ambient light and is preferred for reporting.
- LCD monitors exhibit no geometric distortion.
- LCDs have a weight that is about one third of that of a comparable CRT.
- LCDs are less prone to detrimental ageing effects.
- LCDs are less expensive.
Furthermore, high quality colour LCD devices can be used as grey scale
primary devices, which is not feasible for a colour CRT monitor.
14.4.1. Grey scale standard display function
Today's ubiquity of PACSs enables deployment of display devices at all
locations where access to medical images is needed. The main challenge when
using different display devices in a PACS is to ensure that an image presented to
an observer appears identical irrespective of the display device used, be it a CRT
based or LCD based soft copy display, or hard copy displays, such as film laser
printers or paper printers. The Digital Imaging and Communications in Medicine
(DICOM) grey scale standard display function (GSDF) offers a strategy that
FIG. 14.4. Digital Imaging and Communications in Medicine (DICOM) grey scale standard display function (luminance in cd/m² versus JND index).
JND index    Luminance (cd/m²)
1            0.0500
2            0.0547
3            0.0594
4            0.0643
…            …
1021         3941.8580
1022         3967.5470
1023         3993.4040

Note: The relative difference between the luminance of consecutive just noticeable difference
indices is much higher for low indices (~9%) than for high indices (~0.6%).
A transformation correcting for deviations of the characteristic curve of a specific display system from the GSDF
may be implemented directly in the display device or in the video memory of the
display controller. The result of the transformation is that the modified DDLs
operating the display will generate a characteristic curve identical to the GSDF.
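One way to picture this correction is as a lookup table built by interpolation between the measured characteristic curve and the GSDF. The sketch below uses a toy exponential display curve and a stand-in for the tabulated GSDF; an actual calibration would use measured luminances and the JND index table of DICOM PS3.14.

```python
import numpy as np

# Measured characteristic curve of a (hypothetical) display: luminance in
# cd/m2 produced by each of the 256 digital driving levels (DDLs). The
# exponential model below is a toy stand-in for real measurements.
ddl = np.arange(256)
measured_lum = 0.05 * (400.0 / 0.05) ** (ddl / 255.0)

# Stand-in for the tabulated GSDF (JND index -> luminance); a real
# calibration would use the table from DICOM PS3.14 instead.
jnd_tab = np.arange(1, 1024, dtype=float)
gsdf_lum = 0.05 * (4000.0 / 0.05) ** ((jnd_tab - 1) / 1022.0)

# JND range this display can actually reach:
j_min = np.interp(measured_lum[0], gsdf_lum, jnd_tab)
j_max = np.interp(measured_lum[-1], gsdf_lum, jnd_tab)

# Target luminances: equal JND steps across the display's range, i.e. a
# perceptually linearized grey scale.
target_lum = np.interp(np.linspace(j_min, j_max, 256), jnd_tab, gsdf_lum)

# Correction LUT: for each input DDL, the modified DDL whose measured
# luminance best matches the GSDF target.
lut = np.interp(target_lum, measured_lum, ddl).round().astype(int)
print(lut[0], lut[-1])   # end points of the DDL range are preserved
```

Loading such a LUT into the display controller makes the displayed luminances follow the GSDF without changing the stored image data.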
FIG.14.5. Mapping of digital driving level Ds to the value Dm, so that for input level Ds which
should produce the standard luminance value Ls the transformed value Dm will produce the
correct luminance as given by the grey scale standard display function.
Another frequently used colour space is the red, green, blue (RGB) space, a
natural colour space for a CRT or LCD colour monitor. It uses as coordinates the
intensities of the red, green and blue primary colours to generate a colour pixel
(Fig. 14.7).
The colour space used for hard copy printers is the cyan, magenta, yellow,
key (black) (CMYK) space.
The quality of the colour image depends on the colour depth (the range of
colour intensities) with which each subpixel contributes. Colour quality increases
with subpixel depth. A common classification of the display controller's ability
to reproduce colours is: 8 bit colour (can display 256 colours), 15/16 bit colour
(high colour: can display 65 536 colours), 24 bit colour (true colour: can display
16 777 216 colours) and 30/36/48 bit colour (deep colour: can typically display
over a billion colours).
FIG. 14.7. Red, green, blue colour cube with a grey line as diagonal. The number of (r, g, b)
voxels, i.e. colours available, depends on the bit depth of each coordinate. A depth of 8 bits for
each component would result in 16 777 216 colours.
A nuclear medicine display controller can usually handle true colour pixel
depths, with 8 bits available for each primary colour.
Colour was utilized in nuclear medicine already at an early stage of the
development of digital displays. Since the original image data contain no colour
information, the allocation of colour to an image pixel can be freely chosen. The
allocation takes the form of a CLUT. Conceptually, the CLUT is an array structure
containing the colour coordinates for each colour included in the table. A colour
is defined by three values representing the intensities of the red, green and blue
subpixels. Each pixel intensity in the image maps to an array index of the LUT, so
that each intensity is associated with a particular colour. This is accomplished by
a transformation algorithm. The transformation is usually carried out by the GPU
of the display controller. The CLUT is stored in the memory of the graphics card.
The LUT is usually much smaller in size than the image. Typical CLUTs contain
64–256 elements, i.e. colours. An advantage of a CLUT is that colours
can be changed by changing the LUT, resulting in better display performance.
It is worthwhile noting that for a real world colour image, the colour of each
pixel is determined by the image itself and cannot be arbitrarily associated with
a colour such as is the case for pseudo-colour display. Thus, the quality of a real
world image increases the larger the number of colours that can be reproduced.
Using a CLUT for a colour image of the real world implies a loss of quality, as
can easily be seen on images on the Internet which use CLUTs with typically
64 colours to save on image size. The addition of colour information to native
nuclear medicine and X ray images always results in a pseudo-colour image, with
the colours chosen by the user.
A modern nuclear medicine system typically uses 16–32 different
CLUTs. The choice of colours is a complex issue. A continuous colour scale
can be achieved if the individual components vary slowly and continuously.
Pseudo-colour can be used to increase the PDR relative to grey scale; other
CLUTs may emphasize regions with a specific intensity as, for example, in
the case when performing a Fourier analysis of the beating heart to highlight
amplitude and phase information.
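Applying a CLUT is a single table lookup per pixel. A minimal numpy sketch with a hypothetical heated-object style table (black through red and yellow to white):

```python
import numpy as np

# A hypothetical 256-entry heated-object style CLUT: each row holds the
# (R, G, B) intensities for one image intensity level.
idx = np.arange(256)
clut = np.stack([np.clip(3 * idx, 0, 255),        # red rises first
                 np.clip(3 * idx - 255, 0, 255),  # then green
                 np.clip(3 * idx - 510, 0, 255)], # then blue
                axis=1).astype(np.uint8)

# Applying the CLUT is one indexing operation: every pixel value
# selects one row of the table.
image = np.array([[0, 64], [128, 255]], dtype=np.uint8)
rgb = clut[image]          # shape (2, 2, 3)
print(rgb[1, 1])           # brightest pixel maps to white
```

Swapping in a different 256 × 3 table changes the rendering without touching the image data, which is why changing CLUTs is essentially free.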
14.5.1. Colour and colour gamut
As with grey scale images, it is expected that a colour image displayed on
a PACS display device has the same colour appearance regardless of the type or
the individual characteristics of the display device. Fortunately, the problem of
producing digital colour images with the same perception of colours regardless
of the display device, including display monitors and hard copy printers, has
already been resolved by the printing industry and the photographic industry.
Since each colour is a unique entity, it is to be expected that unambiguous
transformations exist between the coordinates representing the colour in different
colour spaces. Such transformations are indeed available and are the basis of a
CMS. The purpose of a CMS is to produce a colour image that is perceived as
being the same by a human observer regardless of the output device used.
The gamut or colour gamut is defined as the entire range of colours
a particular display device can reproduce. The gamut depends on the type of
display and on design characteristics. The number of vertices of the gamut is
given by the number of primary colours used to compose a colour. In the case
of an LCD or a CRT monitor, the three primary colours, red, green and blue,
are used to produce a colour. For a printer, the colours of several inks or dyes
can be mixed to produce a colour on paper. Most printers can create dots in
a total of six colours which are cyan, yellow, magenta, red (which combines
yellow and magenta), green (yellow plus cyan) and blue (cyan plus magenta).
Typical gamuts for an LCD monitor and for a printer are shown in Fig.14.8. It
is obvious that the monitor can display colours unavailable to the printer and
vice versa. The International Color Consortium (ICC) has published procedures
including colour transformations (CMS) that ensure that a colour image that is
displayed on, for example, a monitor has the same appearance on, for example,
a colour printout [14.6]. The system is based on describing the colour properties
of a colour display device by an ICC colour profile. The colour profile contains
FIG. 14.8. Typical gamuts for a liquid crystal display monitor and a colour printer. The large
non-overlapping areas of the colours which cannot be reproduced by the other device and
must be substituted by similar colours should be noted.
FIG. 14.9. Transverse CT slice at the height of the heart (left), with the corresponding intensity
histogram, using only the pixels within the region surrounding the trunk. The number of bins
is 256. Even when excluding all background pixels, the unequal distribution of intensities
is seen, especially the lack of high intensity values which is in agreement with the small
proportion of bony structure in the image.
(14.1)
Q is normally in the range 0–255. The transformations do not take into account
the values of surrounding pixels; each pixel is processed independently.
r(I) = 0,          I < t
r(I) = α(I − t),   t ≤ I ≤ t + W
r(I) = Q,          I > t + W
(14.2)

with α = Q/W.
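Eq. (14.2) translates directly into code. In this sketch the threshold and width are illustrative values, not standard clinical presets:

```python
import numpy as np

def window(image, t, width, q_max=255):
    """Linear window: 0 below t, q_max above t + width, linear between.

    Implements the piecewise transform of Eq. (14.2) with slope
    alpha = q_max / width.
    """
    alpha = q_max / width
    out = alpha * (image.astype(float) - t)
    return np.clip(out, 0, q_max).round().astype(np.uint8)

# Illustrative window applied to CT numbers (Hounsfield units):
ct = np.array([-1000, 40, 400, 1500])
print(window(ct, t=-150, width=500))   # [  0  97 255 255]
```

Values below the window underflow to black and values above it saturate to white, which is exactly why a single window cannot show lung and bone simultaneously.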
Windowing and thresholding may be hardware implemented, i.e. the
values may be changed by turning knobs on the monitor or the console or, more
frequently, by software implementation using mouse movements, sliders or the
arrows on the keyboard of the display workstation. The diagnostic value offered
FIG. 14.10. CT slice from Fig. 14.9 with typical lung and mediastinal windowing (upper row
from left to right), a bone window (bottom left) and a histogram with corresponding linear
window functions (bottom right).
r(I) = Q · CDF(I)/(M × N)
(14.3)

where
CDF(I) is the cumulative density function of the original image;
Q is the available range of grey scale values;
and the image size is M × N pixels.
For more details, see Ref. [14.7]. Figure 14.11 demonstrates the effect of
histogram equalization using the standard algorithm of the image processing
software package ImageJ [14.8] on the CT slice of Fig.14.9. In the processed
image, the structures of both the bronchi of the lung and the ribs are visualized
in the same image without underflow or overflow and with approximately the
same information content as in the three windowed images of Fig.14.10 together.
The drawbacks of the method are that the visual appearance of an image depends
on the shape of the histogram and may, therefore, be significantly different
between patients, and the fact that the resulting intensity data can no longer be
used to extract quantitative information. The latter is nicely shown by the range
of intensity values in the histogram of Fig. 14.11, which no longer shows the
familiar range of CT numbers.
FIG. 14.11. CT slice from Fig. 14.9 after histogram equalization with a corresponding intensity
histogram, using only the pixels within the region surrounding the trunk. The number of bins
is 256. The cumulative density function is now approximately linear. The intensity values are
no longer related to Hounsfield units. Owing to the better distribution of the intensity values,
all of the structures of interest, including the bone and the bronchi of the lung, are visualized
simultaneously.
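The mapping of Eq. (14.3) can be sketched with a histogram and a cumulative sum. The random test image below is, of course, a stand-in for the CT slice:

```python
import numpy as np

def equalize(image, q_max=255, nbins=256):
    """Histogram equalization following Eq. (14.3): the output value is
    the scaled cumulative density function of the input intensity."""
    hist, edges = np.histogram(image, bins=nbins)
    cdf = hist.cumsum() / image.size            # CDF(I) / (M*N)
    levels = np.digitize(image, edges[1:-1])    # bin index per pixel
    return (q_max * cdf[levels]).round().astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.normal(100, 10, (64, 64))   # narrow band of intensities
eq = equalize(img)
# After equalization the values spread over the full 0..255 range:
print(eq.min(), eq.max())
```

As in the text, the output values depend on the shape of the input histogram and are no longer quantitatively meaningful.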
FIG. 14.12. Orthogonal views of myocardial perfusion SPECT with orientation of the slices
along the long axis of the heart. The upper row shows an original transaxial slice through
the myocardium with the white line indicating the long axis (left) and a sagittal slice (right).
The bottom row shows the reoriented views with the vertical and horizontal slices through
the long axis, and a slice perpendicular to the long axis (from left to right). (Courtesy of
B. König, Hanuschkrankenhaus, Vienna.)
of possibly modified image voxels on the display screen; these techniques will
not be considered further here. Details can be found, for example, in Ref.[14.9].
The dominant ray casting geometry in nuclear medicine applications and in dual
mode imaging is parallel projection. Perspective projection is predominantly
used in virtual endoscopy and is not yet used routinely in dual mode imaging.
14.7.2.1. Transmission type volume rendering
Maximum intensity projection (MIP) consists of projecting the maximum
intensity value encountered along the trajectory of the ray through the data volume
on the corresponding screen pixel. It improves the visualization of small isolated
hot areas by enhancing the contrast (Fig.14.14). MIP is successfully employed
for lesion detection in PET oncological whole body studies. Its efficiency for the
detection of lesions is further increased by displaying the MIP projections as a
sequence of projection angles in cine mode.
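For parallel projection along an axis of the data volume, the MIP is just a maximum over that axis. A toy numpy sketch with a hypothetical hot lesion:

```python
import numpy as np

# Toy activity volume (z, y, x) with a single small hot spot.
vol = np.zeros((32, 32, 32))
vol[10, 12, 20] = 5.0                 # hypothetical "lesion"
vol += 0.1                            # uniform background activity

# MIP along the x axis: each screen pixel (z, y) receives the maximum
# value met by the ray travelling through the volume in x.
mip = vol.max(axis=2)
print(mip[10, 12])    # 5.1 -> the lesion dominates its ray
print(mip[0, 0])      # 0.1 -> background elsewhere
```

Repeating this after rotating the volume over a series of angles yields the cine sequence of projections mentioned above.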
FIG.14.13. Principle of ray casting and splatting geometry. In ray casting, the ray collects
intensity transformation throughout its trajectory. A voxel is usually hit outside its centre
which has to be corrected for by interpolation. Splatting starts from the centre of a voxel and
distributes its intensity on several screen pixels.
radiographs may be used to compare lesion extensions with planar X ray images
of the patient.
FIG. 14.14. A maximum intensity projection (right) compared to a standard coronal slice.
A solitary lesion is clearly visible in the maximum intensity projection image (arrow) while it is
missing in the coronal standard view.
FIG.14.15. Surface of a skull from CT image data using (from left to right) maximum intensity
projection, voxel gradient shading and volume compositing for rendering. Rendered images
were produced with ANALYZE 9.0.
profile representing the apex at the innermost position, with each following
profile surrounding the previous one. The resulting display is referred to as a
bull's-eye display or polar map (Fig. 14.17). The latter name refers to the fact that
the intensity along a given annulus can easily be handled by a polar coordinate
system. The intensities displayed for each annulus correspond to the myocardial
perfusion in that slice or a segment thereof. Absolute perfusion values cannot be
derived from the intensities. The method to obtain an estimate of the degree of
hypo-perfusion and of the location of the perfusion defect consists of comparing
the relative intensity values in different segments of the annuli to the maximally
perfused segments of the same patient, and then comparing the pattern of relative
perfusion of the individual study with normal perfusion patterns. This permits an
estimate of both the degree and extent of the perfusion defects as well as a good
anatomical allocation to the coronary arteries causing the hypo-perfusion.
FIG. 14.17. Bull's-eye displays of myocardial SPECT perfusion studies. Normal perfusion (left)
and hypo-perfusion of the inferior wall (right). The colours indicate the degree of perfusion:
white = normal, orange = acceptable, red = hypo-perfused and green = no perfusion. Also
indicated are the perfusion areas for the main coronary vessels LAD, LCX and RCA. (Courtesy
of B. König, Hanuschkrankenhaus, Vienna.)
When using the native grey scale images for both modalities, it is difficult
to distinguish clearly which intensity comes from which modality. The composite
display becomes much easier to interpret if one of the images uses a CLUT. In
this case, the formula has to be applied to each colour component separately:
RCS(m, n) = αRBG(m, n) + (1 − α)IFG(m, n)
(14.5)

GCS(m, n) = αGBG(m, n) + (1 − α)IFG(m, n)
(14.6)
FIG.14.18. PET/CT fused image display with a PET image on the left showing a hot lesion
at the border between the lung and rib cage. The fused image in the middle shows the location
inside the lung close to the pleura; the CT image on the right confirms and permits close
inspection of the position. Linked cursors point to the position of the lesion. The transparency
factor is 0.5.
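The compositing of Eqs (14.5) and (14.6) (and the analogous blue channel) is a per-channel weighted sum with transparency factor α. The function name and the single-pixel values below are illustrative:

```python
import numpy as np

def fuse(bg_rgb, fg_gray, alpha=0.5):
    """Composite per Eqs (14.5)-(14.6), applied to all three channels:
    C_CS = alpha * C_BG + (1 - alpha) * I_FG for C in (R, G, B)."""
    fg = fg_gray[..., None].astype(float)   # broadcast over channels
    return alpha * bg_rgb.astype(float) + (1 - alpha) * fg

# 1 x 1 example: grey CT background pixel fused with a hot PET pixel
ct = np.array([[[100, 100, 100]]])          # background (RGB)
pet = np.array([[200]])                     # foreground intensity
print(fuse(ct, pet, alpha=0.5))             # [[[150. 150. 150.]]]
```

In practice the foreground would first be passed through a CLUT so that each of its colour components, rather than a single grey value, enters the sum.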
Test                      Equipment          Patterns
Luminance response                           TG18-LN, TG18-CT, TG18-MP
Luminance dependencies    Luminance meter    TG18-UNL, TG18-LN, TG18-CT
Reflection                                   TG18-AD
Resolution                                   TG18-QC, TG18-CX, TG18-PX
Geometric distortions                        TG18-QC
Noise                     None               TG18-AFC
Veiling glare                                TG18-GV, TG18-GVN, TG18-GQs
Chromaticity              Colorimeter        TG18-UNL80
geometric distortions and for resolution are more important for CRTs, whereas
the dependence of luminance on the viewing angle is important only for LCD
displays.
In addition, Ref.[14.13] recommends that a daily check prior to clinical
work be performed by the user. It consists of evaluating anatomical test images
or a suitable geometrical test image such as a TG18-QC test image (Fig.14.19)
to verify adequate display performance. The instructions for assessing the quality
of a display device when using the TG18-QC test pattern are given in Box 14.1.
FIG.14.19. Test pattern TG18-QC suitable for daily quality control of display monitor
performance using visual inspection [14.13].
Box 14.1. Instructions for visual assessment of image quality using the
TG18-QC test pattern as part of daily quality control by the user [14.23]

1. General image quality and artefacts: Evaluate the overall appearance of the
pattern. Note any non-uniformities or artefacts, especially at black-to-white
and white-to-black transitions. Verify that the ramp bars appear continuous
without any contour lines.

2. Geometric distortion: Verify that the borders and lines of the pattern are
visible and straight and that the pattern appears to be centered in the active
area of the display device. If desired, measure any distortions (see
section 4.1.3.2).

3. Luminance, reflection, noise, and glare: Verify that all 16 luminance patches
are distinctly visible. Measure their luminance using a luminance meter, if
desired, and evaluate the results in comparison to GSDF (section 4.3.3.2).
Verify that the 5% and 95% patches are visible. Evaluate the appearance of
low-contrast letters and the targets at the corners of all luminance patches
with and without ambient lighting.

4. Resolution: Evaluate the Cx patterns at the center and corners of the pattern
and grade them compared to the reference score (see section 4.5.3.1). Also
verify the visibility of the line-pair patterns at the Nyquist frequency at the
centre and corners of the pattern, and if desired, measure the luminance
difference between the vertical and horizontal high-modulation patterns
(see section 4.5.3.1).
CHAPTER 15
DEVICES FOR EVALUATING IMAGING SYSTEMS
O. DEMIRKAYA, R. AL-MAZROU
Department of Biomedical Physics,
King Faisal Specialist Hospital and Research Centre,
Riyadh, Saudi Arabia
15.1. DEVELOPING A QUALITY MANAGEMENT SYSTEM
APPROACH TO INSTRUMENT QUALITY ASSURANCE
A quality management system (QMS) has three main components:
(a) Quality assurance (QA);
(b) Quality improvement;
(c) Quality control (QC).
The aim of a QMS is to ensure that the deliverables meet the requirements
set forth by the users. In general, the deliverables are all of the services
provided in a nuclear medicine department, and the diagnostic imaging services in
particular. In this section, the primary focus is on the diagnostic imaging
equipment and the images it produces.
15.1.1. Methods for routine quality assurance procedures
QA is a systematic programme for monitoring and evaluating the production
process. It is an all-encompassing management plan to ensure the reliability
of the production system. In diagnostic imaging, QA helps minimize
the uncertainties and errors in equipment performance by supervising the entire
image production process, which in turn helps ensure that the images generated
are of diagnostic quality. QA also helps identify and rectify problems,
errors, malfunctions and performance drift early. Moreover, a
QA programme supports standardization of the image production process
across centres and, thus, allows comparison of clinical results between centres.
This is especially important in multicentre clinical trials. A QA programme in
nuclear medicine involves all aspects of nuclear medicine, including minimizing
the exposure to personnel, patients and the public; preparation, safety, sterility
gamma cameras require very close attention and, therefore, more frequent and a
larger number of tests than any other diagnostic imaging modality in radiology.
One of the important QC tests that has to be carried out daily on every gamma
camera is the uniformity test. This test shows the current status of the gamma
camera and allows monitoring of any possible deterioration in the performance
of the camera. It can also signal whether there has been any malfunctioning in
the detector elements, such as the photomultiplier tubes or the crystal, since the
last QC test was conducted. These assessments can be performed qualitatively or
quantitatively by a computer program.
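A quantitative evaluation of the daily flood image is commonly based on an integral uniformity metric of the kind defined in NEMA NU 1. The sketch below is a minimal illustration of that idea (names are illustrative; the full NEMA procedure also restricts the analysis to the useful and central fields of view and defines a differential metric, both omitted here):

```python
import numpy as np

def integral_uniformity(flood):
    """NEMA-style integral uniformity (%) of a flood image:
    100 * (max - min) / (max + min), computed after the 9-point
    weighted smoothing used in NEMA NU 1."""
    flood = np.asarray(flood, dtype=float)
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    padded = np.pad(flood, 1, mode="edge")
    rows, cols = flood.shape
    # 3x3 weighted smoothing via shifted views of the padded array
    smoothed = sum(k[i, j] * padded[i:i + rows, j:j + cols]
                   for i in range(3) for j in range(3))
    return 100.0 * (smoothed.max() - smoothed.min()) / (smoothed.max() + smoothed.min())

# a perfectly uniform flood gives 0 %; a 10 % cold defect is clearly flagged
flat = np.full((64, 64), 1000.0)
defect = flat.copy()
defect[30:34, 30:34] *= 0.9
```

A trend of this value over successive daily floods is what reveals slow deterioration of a photomultiplier tube or the crystal.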
15.2.1.1. Point source holders
This phantom is used to hold point sources that are employed in intrinsic
uniformity, resolution and linearity measurements. It is made of lead and
its main purpose is to shield the walls, ceiling and personnel, and to collimate the
radiation towards the detector. Figure 15.1 shows a picture of a source holder. Copper
plates (1–2 mm thick) should be placed in front of the source holder to act as
absorbers and stop the low energy photons. When placed on the floor, the source
holder height can be adjusted such that the point source is directed to the centre
of the detector under investigation.
FIG.15.1. Point source holders in a slanted position so that they can point to the detectors
from the floor.
may contain 56Co and 58Co impurities. These radionuclides have a shorter
half-life (77.234 and 70.86 d, respectively) than that of 57Co (271.74 d) and emit
high energy rays (>500 keV). If the impurities result in non-uniformities, the
sources can also be left to decay for a while before being used. It is advisable to
place the sheet source at a distance of 5–10 cm from the collimator during the
scan. Figure 15.2 shows a commercial 57Co flood source.
FIG.15.3. Top: picture of the slit phantom designed for a cardiac camera whose field of view
is smaller than that of a typical gamma camera. Bottom: acquired images of the slit phantoms
for a typical gamma camera to measure the resolution in the Y (left image) and X (right image)
directions. The white vertical and horizontal lines denote the image of 1 mm slits.
FIG.15.4. A custom built, dual-line source phantom. On the left is the phantom positioned
on the detector, and on the right the same line sources are immersed in a scattering medium
consisting of sheets of Perspex.
The lines are filled with a 99mTc solution with an activity concentration of about
550 MBq/mL (15 mCi/mL) to achieve an adequate count rate when used with the
scattering medium. When measuring the X and Y resolutions, the lines are placed
parallel to the Y and X directions, respectively. In both cases, one of the lines
should be positioned in the centre of the field of view (FOV). The acquired image
should have at least 1000 counts in the peak channel of the line spread function.
To measure the extrinsic resolution with scatter, the dual-line source is
embedded in Perspex sheets: 10 cm of sheets are placed between the collimator
and the line sources and 5 cm above the lines, as seen in Fig. 15.4. The
Perspex sheets under the sources create a scattering medium and the ones above
a backscattering medium. To ensure good contact between the sheets and the line
sources, it is recommended to cut two grooves, through which the lines run, into
one of the sheets.
15.2.1.6. Bar phantom
The second most frequent QC test in nuclear medicine is the resolution
test performed with bar phantoms. Bar phantoms can be used to measure,
FIG.15.5. Left: picture of a typical four-quadrant rectangular bar phantom. Middle: image
of the left bar phantom acquired by an ECAM gamma camera. Right: image of a bar phantom
acquired with an ADAC FORTE gamma camera. Both images were acquired at a matrix size
of 512 × 512 and with a total count of 10 Mcounts.
in terms of the full width at half maximum (FWHM) of the line spread function
can be approximately determined as FWHM ≈ 1.7Sb, where Sb is the size of the
smallest resolvable bars.
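This empirical relation can be applied directly once the smallest resolved quadrant has been identified visually. A tiny helper (function name illustrative):

```python
def fwhm_from_bars(smallest_resolved_bar_mm):
    """Approximate system FWHM (mm) of the line spread function from the
    smallest bar phantom quadrant that is still resolved, via the
    empirical relation FWHM ~ 1.7 * Sb."""
    return 1.7 * smallest_resolved_bar_mm
```

For example, resolving the 3.5 mm quadrant but not the next smaller one implies an FWHM of roughly 6 mm.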
15.2.1.7. Dual-line phantom for whole body imaging
This phantom is used to test the whole body resolution of a gamma camera
system. It consists of two parallel line sources which are 1 mm in internal
diameter and 10 cm centre to centre. Figure 15.6 shows a custom built, dual-line
phantom. The lines are usually filled with 99mTc activity with a concentration of
about 370 MBq/mL (10 mCi/mL) to achieve an adequate count rate. During the
testing, the line sources are placed at a distance of 10 cm from both collimators.
When measuring the perpendicular resolution, the lines should be placed parallel
to the bed direction with one of them being in the centre of the bed. When
measuring the parallel resolution, the lines should be positioned perpendicular to
the direction of the bed movement. The whole body resolution is calculated from
the FWHMs of the line profiles extracted from the image of the dual-line sources.
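The FWHM extraction from a measured line profile can be sketched as follows. This is a simple half-maximum crossing with linear interpolation between samples; it assumes a single peak and a negligible background level, whereas practical analyses often fit the profile instead:

```python
import numpy as np

def fwhm(profile, pixel_mm=1.0):
    """FWHM of a sampled line spread function.

    Locates the half-maximum crossings on either side of the peak and
    interpolates linearly between samples. Assumes a single peak and
    negligible background."""
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    left, right = above[0], above[-1]
    # linear interpolation of the exact crossing on each flank
    xl = left - (y[left] - half) / (y[left] - y[left - 1])
    xr = right + (y[right] - half) / (y[right] - y[right + 1])
    return (xr - xl) * pixel_mm
```

For a Gaussian line spread function with a standard deviation of 4 pixels, this returns approximately 2.355 × 4 ≈ 9.4 pixels, as expected.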
should be measured. The residual activity is subtracted from the initial activity
to determine the net activity injected into the dish. This dish should be placed
at a distance of 10 cm from the face of the collimator. It is recommended to
acquire two images. The average count, in units of counts per megabecquerel per
second or counts per minute per microcurie, is determined to measure the planar
sensitivity of the system.
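The planar sensitivity calculation described above might be coded as below. The function name and argument layout are illustrative; the default half-life is that of 99mTc:

```python
def planar_sensitivity(total_counts, acq_time_s, assayed_MBq, residual_MBq,
                       delay_s=0.0, half_life_s=21626.0):
    """Planar sensitivity in counts per second per MBq.

    Net activity is the assayed syringe activity minus the residual
    activity, decay corrected (99mTc by default) from assay time to the
    start of the acquisition."""
    net_MBq = (assayed_MBq - residual_MBq) * 0.5 ** (delay_s / half_life_s)
    return (total_counts / acq_time_s) / net_MBq
```

Multiplying the result by 2.22 converts counts/s/MBq to counts/min/µCi, the alternative unit mentioned above.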
15.2.1.9. Multiple window spatial registration phantom: lead-lined point source
holders
A multiple window spatial registration test measures the camera's ability to
position photons of different energies. In this section, the phantom is discussed,
as described in Ref. [15.6], together with its preparation and the measurement
procedures. The details of the test conditions and test phantoms can be found
in Ref. [15.6]. A schematic drawing of the lead phantom is given in Fig. 15.7.
As suggested by NEMA, nine of these lead-lined source holders are placed on
the surface of the detector. The relative position of each holder is shown in the
drawing. Plastic vials, as seen in Fig. 15.7, can be used to hold the actual activity
of 67Ga (~7–11 MBq (200–300 µCi) in each). Other acquisition parameters and
camera settings are given in Table 15.1.
FIG.15.7. Multiple window spatial registration phantom lead-lined point source holders. On
the right is the top view of the point sources or source holders placed on the detector crystal.
The locations of the point sources are determined by multiplying the dimensions of the useful
field of view (UFOV) by 0.4 and 0.8. On the left is the cross-sectional view of the source holder
together with the source vial.
Images of nine (or four) point sources of 67Ga are acquired normally at
three different photopeak energy windows (the three photopeaks for 67Ga are 93,
185 and 296 keV).
The aim of the subsequent calculation is to find the centroids of these
points in the image acquired at different energy windows and to compare the
displacement between the point source images acquired at different energy
windows. The maximum displacement between the centroids of point sources
is the performance parameter indicating the error in multiple window spatial
registration. The details of the calculation of this performance parameter can be
found in Ref. [15.6].
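The centroid and displacement calculation can be sketched as follows, assuming one cropped sub-image per energy window for a given source position (NEMA reports the x and y displacements separately for each source; here only the maximum Euclidean displacement is returned):

```python
import numpy as np

def centroid(img):
    """Count-weighted centroid (x, y) of a point source sub-image."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

def mwsr_error(images_per_window, pixel_mm=1.0):
    """Maximum displacement (mm) between the centroids of the same point
    source imaged in different energy windows."""
    cents = [np.array(centroid(im)) for im in images_per_window]
    return pixel_mm * max(np.linalg.norm(a - b) for a in cents for b in cents)
```

The overall result quoted for the camera is the largest such displacement over all nine (or four) source positions.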
TABLE 15.1. IMAGE ACQUISITION AND CAMERA SETTINGS

Radionuclide     67Ga
Activity         —
Total counts     —
Energy window    15%
Count rate       <10 kcounts/s
Pixel size       <2.5 mm
Matrix size      ~1024 × 1024
axial extents should be similar in length. Their maximum dimension (the axial
extent of the activity) should not exceed 2 mm. The activity in the point sources
should not vary by more than 10%. The point sources should be suspended in air
and positioned in accordance with the suggestions in Ref. [15.6] (Fig. 15.8). An
alternative practical solution to suspend the point sources in air is to mark the
positions of the point sources on a thin paper attached to a polystyrene (widely
known as Styrofoam) sheet, and use this as a source holder. The scatter caused by
the holder should be negligible.
FIG.15.8. Top and side views of the position of the point sources as suggested by the National
Electrical Manufacturers Association.
FIG.15.9. Schematic drawing of the front and side views of a triple-line source phantom.
FIG.15.10. A commercial triple-line source phantom with three line sources inside. The tank
is filled with water to simulate a scattering medium.
The line sources should all be emptied of the decayed solution left from the
previous test, using two empty syringes attached to the two ends of each line source.
To fill each line source, two syringes are attached to its ends, one empty and one
containing activity at a concentration of around 300–500 MBq/mL.
While the plunger of the active syringe is pushed, the plunger of the empty
syringe should be pulled very slowly until the 99mTc solution appears at the
other end. The filled line source should be securely sealed at both ends with
the original caps, ensuring that there is no leak. It should also be ensured that the
entire line source is uniformly filled.
Radionuclide             99mTc
Radius of rotation       <20
Total number of views    100
Scan time/view           ~5 s at 20 kcounts/s
Energy window            15%
Collimator               —
Pixel size               —
phantom is filled with water uniformly mixed with a known amount of activity
(approximately 350 MBq) of 99mTc. The amount of activity should be such that
the count rate at the photopeak energy window is 10 000 ± 2000 counts/s. The
following parameters have to be accurately determined and recorded to calculate
the volume sensitivity:
Volume of the phantom;
Pre- and post-injection syringe activity to determine net injected activity;
Elapsed time half way through the SPECT acquisition;
Total scan time.
Further details of the measurement and calculations can be found in
Ref. [15.6].
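Using the recorded parameters, the calculation might look like the sketch below. It expresses volume sensitivity as the total count rate per unit activity concentration, with the injected activity decay corrected to the midpoint of the acquisition; the exact definition and units to be used should be taken from Ref. [15.6]:

```python
def volume_sensitivity(total_counts, total_scan_time_s, net_activity_MBq,
                       phantom_volume_mL, time_to_mid_scan_s,
                       half_life_s=21626.0):  # 99mTc, ~6.007 h
    """SPECT volume sensitivity in counts/s per (MBq/mL): the total
    count rate divided by the activity concentration, with the net
    injected activity decay corrected to the acquisition midpoint."""
    decayed_MBq = net_activity_MBq * 0.5 ** (time_to_mid_scan_s / half_life_s)
    concentration = decayed_MBq / phantom_volume_mL
    return (total_counts / total_scan_time_s) / concentration
```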
15.2.2.4. Total performance test phantoms
Image quality measures or overall SPECT system performance, such as
noise, tomographic uniformity, contrast and lesion detectability, are measured
using total performance phantoms. These phantoms are commercially available
and are not so easy to build in an institutional workshop. There are several
commercial phantoms for this purpose. Some of the phantoms that are frequently
used to assess the performance of a SPECT system are discussed. It should be
noted that these phantoms can also be used to evaluate PET systems.
15.2.2.5. Carlson phantom
The Carlson phantom (designed and developed by R.A. Carlson, Hutzel
Hospital, Detroit, MI, USA, and J.T. Colvin, Texas Oncology PA, Dallas,
TX, USA) in this category is frequently used for evaluating the tomographic
uniformity, image contrast, noise and linearity. The main source tank
(see Fig. 15.11) is made of acrylic with dimensions: 20.32 cm inside diameter,
21.59 cm outside diameter and 30.48 cm length. The phantom comes with
various inserts, which are demonstrated and described in Fig. 15.11, to evaluate
the performance parameters noted above. The thick plastic screws on the top lid
allow easy filling and draining of the tank with water. The 99mTc solution injected
inside the tank serves as the background activity, which may vary between 300
and 550 MBq, depending on the collimator used [15.7].
There is an insert or section for each performance measure. The
SPECT uniformity is assessed using the uniform section of the phantom. The
non-uniformities in the gamma camera can result in severe ring or bull's-eye
artefacts. These artefacts can be checked for by looking at the uniform transverse
562
slices. The amount of noise can be quantitatively calculated from the uniform
section.
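One common way to quantify that noise is the coefficient of variation of the pixel values in a circular region of interest drawn on a uniform transverse slice. A minimal sketch (ROI placement and size are left to the user; other noise metrics are also in use):

```python
import numpy as np

def roi_noise(slice_img, cx, cy, radius):
    """Noise in a circular ROI of a uniform transverse slice, expressed
    as the coefficient of variation (std/mean) in per cent."""
    slice_img = np.asarray(slice_img, dtype=float)
    ys, xs = np.indices(slice_img.shape)
    roi = slice_img[(xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2]
    return 100.0 * roi.std() / roi.mean()
```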
15.2.2.6. Jaszczak circular and elliptical phantoms
Similar to the Carlson phantom, Jaszczak elliptical and circular phantoms
are used to evaluate the overall performance of SPECT systems after a repair
or preventive maintenance, or during acceptance testing or quarterly testing. In
addition to the purposes above, these phantoms can be used in evaluating the
impact of reconstruction filters on resolution, as well as for other purposes in
research studies.
Jaszczak phantoms consist of a main cylinder or tank made of acrylic
with several inserts (see Fig. 15.12). They are manufactured and sold by Data
Spectrum Corporation (NC, USA). Jaszczak phantoms, which may have circular
or elliptical tanks, come in several different models. The cylinders of all
models of the circular flanged phantoms have the same physical specifications:
21.6 cm inside diameter, 18.6 cm inside height and 3.2 mm wall thickness. The
principal differences between the models of the flanged cylindrical
Jaszczak phantoms are the diameters of the rod and solid sphere inserts. The
circular phantom has flanged and flangeless models; the latter is recommended
by the American College of Radiology for accreditation of nuclear medicine
departments. These models are designed to test a range of systems, from
low resolution to ultra-high resolution, the latter model having the smallest
rods and spheres.
All Jaszczak phantoms have six solid spheres and six sets of cold rods. In
flanged models, the sizes of the spheres vary. The number of rods in each set
depends on the size of the rod in that set as different models of the phantom have
rods of different sizes. In flangeless models, the diameters of the spheres are 9.5,
12.7, 15.9, 19.1, 25.4 and 31.8 mm, while the rod diameters are 4.8, 6.4, 7.9, 9.5,
11.1 and 12.7 mm. Both solid spheres and rod inserts mimic cold lesions in a hot
background. Spheres are used to measure the image contrast while the rods are
used to investigate the image resolution in SPECT systems.
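For the cold spheres, one common contrast definition is the relative count deficit of the sphere with respect to the hot background, as sketched below (definitions vary in detail between protocols, so the formula in the relevant standard or accreditation programme should be checked):

```python
def cold_contrast(mean_sphere, mean_background):
    """Percentage contrast of a cold sphere in a hot background,
    100 * (B - S) / B: a perfectly cold sphere scores 100 %."""
    return 100.0 * (mean_background - mean_sphere) / mean_background
```

Mean counts are taken from regions of interest drawn on the sphere and on the adjacent uniform background in the reconstructed slices.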
15.2.2.7. Anthropomorphic torso phantoms
Anthropomorphic torso phantoms are used in testing gamma cameras in
SPECT mode to evaluate data acquisition, attenuation correction and image
reconstruction methods. They normally simulate or model the upper torso of
the body (from the heart down to the diaphragm) of an average male or female
patient. These phantoms consist of a body-shaped (elliptical) cylinder with
fillable inserts for organs such as the heart, lungs and liver (see Fig. 15.13).
Phantom section                 Description
Hot lesions                     —
Linearity/uniformity section    Crossed grid of cut-out channels, again in
                                an acrylic block, which can be used to assess
                                the linearity. The region where only background
                                activity is available is used to evaluate the
                                tomographic or SPECT uniformity.
FIG.15.12. Jaszczak phantom used for verifying image quality (phantom by Data Spectrum
Corporation, USA).
Defects can also be added to the heart insert. Lung inserts are filled with
Styrofoam beads and water to emulate lung tissue density. The phantoms can be
used to evaluate non-uniform attenuation correction methods including CT based
attenuation correction in SPECT/CT systems and scatter compensation methods.
When used with the optional cardiac insert, cardiac SPECT data acquisition and
reconstruction methods may also be evaluated.
Filling the inserts with different distributions of radioactivity is not as
easy as filling other phantoms because of the multiple organs and the organ
to background ratios that need to be adjusted. To set the concentration ratios,
the volumes of the organ inserts need to be measured accurately a priori. For a
simulation of a 1110 MBq (30 mCi) sestamibi stress study, the injected activity
concentrations, as suggested in Ref. [15.8], are given in Table 15.3.
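Converting the measured volumes and the desired organ-to-background ratios into the activities to inject can be sketched as below. The numbers are illustrative only, chosen to give concentrations of the order of those in Table 15.3; a real study uses the volumes measured for the specific phantom:

```python
def fill_activities(volumes_mL, ratios, tissue_conc_kBq_mL):
    """Activity (MBq) to inject into each compartment of the torso
    phantom so that the desired organ-to-tissue concentration ratios are
    achieved. `ratios` are relative to the tissue background (tissue = 1)."""
    return {organ: volumes_mL[organ] * ratios[organ] * tissue_conc_kBq_mL / 1000.0
            for organ in volumes_mL}

# illustrative volumes (mL) and ratios for a sestamibi-type distribution
act = fill_activities({"heart": 117, "liver": 1177, "tissue": 8620},
                      {"heart": 10, "liver": 6, "tissue": 1},
                      tissue_conc_kBq_mL=25.0)
```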
Torso phantoms can be integrated with a fillable breast phantom, which
is also commercially available. These breast phantoms allow the inclusion
of inserts that simulate breast lesions, which can be employed to evaluate lesion
detectability.
The volumes in the second column in Table 15.3 are the measured volumes
of the torso phantom inserts.
TABLE 15.3. INJECTED ACTIVITY CONCENTRATIONS FOR A SIMULATED SESTAMIBI STRESS STUDY [15.8]

Section    Volume (mL)    Activity concentration (kBq/mL)    Total activity (MBq)
Heart      117            250                                30
Tissue     8620           25                                 225
Liver      1177           150                                175
Lungs      —              —                                  —
FIG. 15.13. A commercial anthropomorphic phantom and a transaxial slice cutting through
the heart and lungs from its image acquired by a SPECT/CT system.
FIG. 15.14. Three dimensional Hoffman phantom with a water fillable cylinder and layers of
inserts (phantom by Data Spectrum Corporation, USA).
FIG. 15.15. Defrise hot spot phantom manufactured by Data Spectrum Corporation, USA.
FIG.15.17. Transaxial (top left) and coronal (bottom left) cross-sectional view of the image
quality (IQ) phantom through the centres of fillable spheres. Sphere diameters and the other
dimensions are given in millimetres (reproduced with permission). On the right is the schematic
drawing demonstrating the positioning of the IQ phantom together with the scatter phantom.
filling is also done through the capillaries without removing the cover lid.
Filler screws for each fillable part inside the body phantom allow easy
access. A picture of the phantom is shown in Fig. 15.18.
(b) Cylindrical insert: A cylindrical section filled with a mixture of
polystyrene beads and water to mimic lung attenuation (the density is around
0.3 ± 0.1 g/mL) is placed axially in the centre of the phantom,
with the same length as the body phantom. The outside diameter of the
insert is about 5 cm.
(c) Phantom preparation: Table 15.4 shows the measured volumes of the
various inserts and the torso cavity of the IQ phantom. It is suggested that all
the volumes be measured upon acquiring a new IQ phantom. The activities
used to fill the phantom should be measured using a calibration time that
corresponds to the planned PET acquisition time, taking into account the
time necessary for the preparation and positioning of the phantom for
this test. Table 15.4 shows the typical activity concentrations that may be
prepared and injected into the background and the hot spheres in order to
have the proper activity concentration at the time of the scan (supposed to
be performed 45 min after phantom preparation). It should be noted that the
activity concentration ratio in the table is 8:1. A 4:1 activity concentration
ratio can be easily obtained by doubling the amount of activity in the
background.
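The decay correction from preparation to scan time mentioned above is a simple forward projection of the 18F decay. A minimal sketch (function name illustrative):

```python
F18_HALF_LIFE_MIN = 109.77  # physical half-life of 18F in minutes

def conc_at_preparation(target_conc_at_scan, delay_min,
                        half_life_min=F18_HALF_LIFE_MIN):
    """Activity concentration (same units as the target) to prepare now
    so that the desired concentration remains at scan time, which is
    delay_min later."""
    return target_conc_at_scan * 2.0 ** (delay_min / half_life_min)
```

For the 45 min delay assumed here, a scan-time background concentration of 5.3 kBq/mL corresponds to about 7.0 kBq/mL at preparation time.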
TABLE 15.4. MEASURED VOLUMES AND TYPICAL ACTIVITIES FOR THE IQ PHANTOM

Phantom section        Volume (mL)       Typical activity   Activity concentration       Activity concentration
                                         (MBq)              at time of preparation       at time of scan
                                                            (kBq/mL)                     (kBq/mL)
Torso cavity           9700              n.a.               —                            —
Hot spheres            Different sizes   n.a.               56                           42.4
Cold spheres           Different sizes   n.a.               —                            —
Lung insert            353               n.a.               —                            —
Background             9286              65                 —                            5.3
(torso − all inserts)

Note: n.a.: not applicable. The scan is supposed to be performed 45 min after phantom
preparation. For a description of the phantom, see: http://www.spect.com/pub/
NEMA_IEC_Body_Phantom_Set.pdf
FIG.15.19. Positioning of the scatter phantom on the patient bed: transaxial view (left);
picture of the National Electrical Manufacturers Association scatter phantom (right).
FIG.15.22. Three capillary point sources mounted on a point source holder used in PET to
measure spatial resolution.
First, high resolution images of the body need to be acquired. Then, the individual
organs and structures are segmented from the high resolution images. The
segmentation is the most challenging task as the boundaries between organs and
tissues are often not well defined. Researchers, therefore, resort to tedious manual
or semi-automated segmentation methods. Obtaining CT scans of desired pixel
resolution or dimension and slice thickness may result in a significant amount
of exposure to ionizing radiation; thus, it is difficult to recruit healthy subjects
for this purpose. As a result, some of the voxel models have been constructed
from medical images of patients. For example, the Zubal phantom [15.15] was
created from CT scans of a patient by manual segmentation. These limitations on
pixel dimensions and slice thickness have made cadavers an attractive choice for
building voxel based models. In voxelized models, the surfaces of the organs are
jagged and piecewise continuous and, therefore, not smooth. Other issues, such as
shifting of internal organs and non-rigid transformations in organ shape during
the scan in the supine position, may limit the generality of these models.
Hybrid models combine the best of both worlds. Surfaces of the segmented
structures in voxelized models are defined by mathematical formulations used to
define irregularly shaped surfaces such as 3-D B-spline surfaces.
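The idea of a smooth surface driven by a grid of control points can be illustrated with a minimal tensor-product NURBS patch evaluator. This is only a sketch of the underlying mathematics (the surface machinery used in actual hybrid phantoms such as XCAT is far more elaborate); all names are illustrative:

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    out = 0.0
    d = knots[i + p] - knots[i]
    if d > 0.0:
        out += (u - knots[i]) / d * bspline_basis(i, p - 1, u, knots)
    d = knots[i + p + 1] - knots[i + 1]
    if d > 0.0:
        out += (knots[i + p + 1] - u) / d * bspline_basis(i + 1, p - 1, u, knots)
    return out

def nurbs_point(u, v, ctrl, weights, p=2):
    """Evaluate a clamped tensor-product NURBS patch at (u, v) in [0, 1).
    ctrl is an (n, m, 3) grid of control points, weights an (n, m) array."""
    n, m, _ = ctrl.shape
    # clamped (open uniform) knot vectors for each parametric direction
    ku = [0.0] * (p + 1) + list(np.linspace(0, 1, n - p + 1))[1:-1] + [1.0] * (p + 1)
    kv = [0.0] * (p + 1) + list(np.linspace(0, 1, m - p + 1))[1:-1] + [1.0] * (p + 1)
    num, den = np.zeros(3), 0.0
    for i in range(n):
        Nu = bspline_basis(i, p, u, ku)
        for j in range(m):
            w = Nu * bspline_basis(j, p, v, kv) * weights[i, j]
            num += w * ctrl[i, j]
            den += w
    return num / den

# a 3x3 quadratic patch over a flat control grid stays in the z = 0 plane
grid = np.array([[[i, j, 0.0] for j in range(3)] for i in range(3)], dtype=float)
pt = nurbs_point(0.5, 0.5, grid, np.ones((3, 3)))
```

Deforming the control points (for example, to follow a beating heart wall) deforms the whole surface smoothly, which is precisely why NURBS primitives suit 4-D anatomical modelling.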
A group of researchers developed a series of 3-D and 4-D computational
models. Their first model, the mathematical cardiac torso phantom, was a
mathematical model based on simple geometric primitives but also used
cut-planes and overlaps to create complex biological shapes to be used in nuclear
medicine research. This model also included a beating heart based on gated MRI
patient data and a respiratory model based on known respiratory mechanics. With
this model, emission and transmission data could be simulated. The following
models, 4-D NCAT and cardiac torso (XCAT) (see Fig. 15.23), were based on
the visible human CT dataset. The organ shapes, i.e. surfaces, were reconstructed
using the primitive non-uniform rational B-spline (NURBS) surfaces. The 4-D
models use cardiac and respiratory motions developed using 4-D tagged MRI
data and 4-D high resolution respiratory-gated CT data, respectively. These
models, from the hybrid class, can successfully model not only the anatomy but
also physiological functions such as respiratory and cardiac motion.
Such 4-D models can be used to accurately simulate SPECT and PET
images of the torso and can be particularly helpful for optimizing image
acquisition protocols and image reconstruction algorithms, and understanding the
various effects of these complex motions on the acquired PET or SPECT images.
These models are also accessible free of charge for academic research.
These models have been widely used in internal absorbed dose calculations
in nuclear medicine or in calculation of dose distribution from external sources
in radiation therapy and in studying issues pertinent to imaging systems and their
performance characteristics. They have also been quite helpful in the optimization
of image acquisition protocols and reconstruction methods.
FIG.15.23. Left: initial extension of the 4-D XCAT anatomy. Right: simulated chest X ray
CT images from the extended 4-D XCAT. Coronal (top row) and transaxial (bottom two rows)
reconstructed slices are shown (reproduced with permission from P. Segars).
Since anatomy and physiological functions are accurately known, they can
serve as gold standards. Computational models may be preferred because the use
of physical phantoms leads to unnecessary occupational exposure to radiation,
and the preparation and repetition of experiments with physical phantoms
can be lengthy.
An ideal model should be able to conform, reasonably well, to the size
and shape of the object being represented. As personalized medicine
is a strong driving force behind much current research in many related fields,
personalized modelling should be the aim of computational model development
research.
15.3.1. Emission tomography simulation toolkits
15.3.1.1. SimSET
SimSET, first released in 1993 and developed at the University of
Washington, is a simulation package that can simulate PET and SPECT emission
tomography systems using Monte Carlo simulations. It can model the photon
interaction process as well as the imaging detector geometries. SimSET allows
in PET and the gamma cameras and SPECT systems may require sophisticated
software applications; thus, in such a case, the manufacturer must provide the
calculation software. The documentation for the acceptance test procedures
may be made available by the vendor. If needed, the recommendation of the
manufacturer should be followed, for instance, with regard to the amount of
activity required for each test. In multimodality imaging systems, additional
tests, which are not discussed in the existing guidelines, such as the accuracy of
image registration and attenuation correction, must also be conducted.
Before starting acceptance testing, the following additional issues should
be considered:
An accurate dose calibrator is an essential part of acceptance testing and
must, therefore, be available.
The required amount of radioactivity has to be arranged before starting
acceptance testing, so that the procedure is not interrupted.
Proper calibration of the imaging system prior to acceptance testing is
of paramount importance. Any major erroneous calibration or lack of
calibration may result in an increase in commissioning cost and undue
delays in acceptance testing.
The order of the tests that will be conducted must be arranged so that any
malfunctioning or improper calibration can be discovered early on. This
will minimize the number of tests that must be repeated after recalibration
of the system.
If the medical physicist is not familiar with the system, a vendor
representative who knows how to operate the scanner and how to run the
calculation software should be present during acceptance testing.
All the required phantoms discussed in earlier sections of this chapter
should be made ready and prepared in advance.
15.4.2. Procurement and pre-purchase evaluations
When an institution decides to buy an imaging system, the administration
should start planning by defining the purpose(s) for acquiring the
system and should form a committee of professionals to take on all of the
responsibilities, from purchasing to setting up the system.
The purchasing committee should include the following professionals, as
defined in Ref. [15.1]:
Nuclear medicine and radiology physicians;
A medical physicist with experience in nuclear medicine;
needs. After studying these systems, the required specifications should be set
with the aim of not excluding any available system initially. As good practice,
one or several Excel worksheets should be developed. The worksheet(s)
should list all of the specifications: hardware, performance parameters, imaging
table, standard software, optional software, etc. Under each category, the
different specifications in that category should be listed with their limits.
Examples of hardware specifications are crystal dimensions and shape, number
of photomultiplier tubes, bore diameter and head movement ranges. Examples
of performance specifications are resolution, uniformity, dead time, SPECT
specifications, noise equivalent count rate and sensitivity. Examples of imaging
table specifications are pallet thickness, attenuation factor, scan range and speed,
minimum and maximum floor clearance, and weight limits. Knowing all of the
software that comes with the system on the acquisition and processing stations,
as well as the optional software available, is necessary at this stage. The
worksheet(s) should be distributed to all vendors as a soft copy, so that the
answers from each vendor can be collated in one sheet, allowing easy comparison
of each specification between vendors.
The tender should be prepared by the committee members and should
follow the institutions local regulations. It should include a summary of the
terms and conditions of the new equipment purchase deal. The following items
may be requested in a tender:
Name and model of the equipment.
Terms of pricing: method of payment, site preparation, accessories, etc.
Application specialist training.
System upgrade conditions.
Equipment references: a short list of current users of a similar system, local or
international.
Training of staff.
Equipment warranty.
Installation schedule and means of coordination.
Responsibility for site preparation, including removal of old equipment.
User and engineering manuals and equipment specifications (NEMA and
others).
Acceptance testing to be performed by a medical physicist (the system
should comply with NEMA or local specifications).
Vendor commitments on maintenance and readiness of spare parts.
Specifications of local civil work and materials used.
Other steps that may assist the committee in the evaluation stage are:
Site visits: Manufacturers take the prospective customers to their reference
sites to evaluate the systems and listen to the users.
Evaluation of the clinical and phantom images provided by the
manufacturers: It is recommended that this be carried out on a common
imaging workstation for an objective comparison of different imaging
systems because each imaging workstation may process images differently
before displaying them on the screen. The medical physicist has to facilitate
the unbiased and blind comparison of the clinical images by the nuclear
medicine physicians.
Surveying centres with similar systems through a written questionnaire can
also be very effective and beneficial.
Inviting the vendor representatives to present their product in detail.
After thorough evaluation of all systems, the committee decides on the
most appropriate system upon considering the cost and other factors such as the
availability of a good maintenance service in the region.
After the system is chosen, the committee should supervise the installation
process. It should help the vendor representative to finalize all of the paperwork
and obtain access permits for the location. The system should be installed
completely, with all of the accessories and software ordered.
The local medical physicist or a private consultant should perform the
acceptance testing on the system. The committee should facilitate and make
available all of the necessary resources to the medical physicist to complete the
task and get the system ready for clinical use.
15.4.3. Acceptance testing as a baseline for regular quality assurance
As mentioned in Section 15.4.1, the medical physicist should produce
reference test results during acceptance testing. The tests selected as references
should be easy to perform without sophisticated procedures and should be
achievable by the user within an acceptable period of time. These tests should
reflect the performance of the system in its working environment. The results of
the routine tests should be compared against the results of these reference tests.
For example, the medical physicist may acquire a five or ten million
counts uniformity image as a reference image for the system uniformity test
during the acceptance period. This is less sophisticated than the usual 30 million
counts uniformity image acquired for the acceptance testing. Another example is
acquiring a 10 million counts image for the bar phantom during the acceptance
testing and considering it a reference image. Some of the results of acceptance
testing can thus serve directly as the baseline values against which routine
quality control results are compared.
The results of these tests should meet the specifications set by the
manufacturer, as they are usually one of the main reasons for selecting a particular
system. If one or more test results do not meet the manufacturer's specifications,
the test should be repeated carefully. If the results are again outside the
specifications, the vendor engineer should rectify the problem at hand and then
repeat the calibrations if necessary.
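As an illustration of how an acceptance result can serve as a baseline, a routine flood image can be reduced to a single NEMA-style integral uniformity figure and trended against the acceptance value. The sketch below is a simplified calculation in Python (the function name is illustrative); the smoothing and useful field of view masking steps of the full NEMA procedure are omitted:

```python
def integral_uniformity(flood):
    """NEMA-style integral uniformity (%) of a flood image region:
    100 * (max - min) / (max + min) over the pixels supplied.
    The full NEMA analysis also smooths the image and restricts the
    calculation to the useful/central field of view; both steps are
    omitted in this sketch."""
    counts = [c for row in flood for c in row]
    hi, lo = max(counts), min(counts)
    return 100.0 * (hi - lo) / (hi + lo)
```

A routine value drifting upwards from the acceptance figure flags a developing uniformity problem.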
REFERENCES
[15.1] INTERNATIONAL ATOMIC ENERGY AGENCY, Quality Assurance for PET and
PET/CT Systems, IAEA Human Health Series No. 1, IAEA, Vienna (2009).
[15.2] INTERNATIONAL ATOMIC ENERGY AGENCY, Quality Assurance for SPECT
Systems, IAEA Human Health Series No. 6, IAEA, Vienna (2009).
[15.3] INTERNATIONAL COMMISSION ON RADIATION UNITS AND MEASUREMENTS,
Phantoms and Computational Models in Therapy, Diagnosis and Protection, ICRU
Rep. 48, ICRU, Bethesda, MD (1992).
[15.4] INTERNATIONAL COMMISSION ON RADIATION UNITS AND MEASUREMENTS,
Tissue Substitutes in Radiation Dosimetry and Measurement, ICRU Rep. 44, ICRU,
Bethesda, MD (1989).
[15.5] DEMIRKAYA, O., AL MAZROU, R., Performance test data analysis of scintillation
cameras, IEEE Trans. Nucl. Sci. 54 (2007) 1506–1515.
[15.6] NATIONAL ELECTRICAL MANUFACTURERS ASSOCIATION, Performance
Measurements of Gamma Cameras, Standards Publication NU 1-2007, NEMA (2007).
[15.7] GRAHAM, L.S., FAHEY, F.H., MADSEN, M.T., VAN ASWEGEN, A.,
YESTER, M.V., Quantitation of SPECT performance: Report of Task Group 4,
Nuclear Medicine Committee (AAPM Report No. 52), Med. Phys. 22 (1995)
401–409.
[15.8] NICHOLS, K.J., et al., Instrumentation quality assurance and performance, J. Nucl.
Cardiol. 13 (2006) 25–41.
[15.9] HOFFMAN, E.J., CUTLER, P.D., DIGBY, W.M., MAZZIOTTA, J.C., 3-D phantom
to simulate cerebral blood flow and metabolic images for PET, IEEE Trans. Nucl.
Sci. 37 (1990) 616–620.
[15.10] NATIONAL ELECTRICAL MANUFACTURERS ASSOCIATION, Performance
Measurements of Positron Emission Tomography, Standards Publication NU 2-2007,
NEMA (2007).
[15.11] BAILEY, D.L., JONES, T., SPINKS, T.J., A method for measuring the absolute
sensitivity of positron emission tomographic scanners, Eur. J. Nucl. Med. 18 (1991)
374–379.
[15.12] XU, X.G., CHAO, T.C., BOZKURT, A., VIP-Man: an image-based whole-body adult
male model constructed from color photographs of the visible human project for
multi-particle Monte Carlo calculations, Health Phys. 78 (2000) 476–486.
[15.13] ZAIDI, H., XU, X.G., Computational anthropomorphic models of the human
anatomy: the path to realistic Monte Carlo modeling in radiological sciences, Annu.
Rev. Biomed. Eng. 9 (2007) 471–500.
[15.14] CAON, M., Voxel-based computational models of real human anatomy: a review,
Radiat. Environ. Biophys. 42 (2004) 229–235.
[15.15] ZUBAL, I.G., et al., Computerized three-dimensional segmented human anatomy,
Med. Phys. 21 (1994) 299–302.
[15.16] INTERNATIONAL ATOMIC ENERGY AGENCY, Quality Control of Nuclear
Medicine Instruments, IAEA-TECDOC-602, IAEA, Vienna (1991).
[15.17] INTERNATIONAL ELECTROTECHNICAL COMMISSION, Medical Electrical
Equipment – Characteristics and Test Conditions of Radionuclide Imaging Devices –
Anger Type Gamma Cameras, 3rd edn, IEC 60789, IEC, Geneva (2005).
[15.18] INTERNATIONAL ELECTROTECHNICAL COMMISSION, Radionuclide Imaging
Devices – Characteristics and Test Conditions – Part 2: Single Photon Emission
Computed Tomographs, Edn 1.1, IEC 61675-2, IEC, Geneva (2005).
[15.19] INTERNATIONAL ELECTROTECHNICAL COMMISSION, Radionuclide Imaging
Devices – Characteristics and Test Conditions – Part 3: Gamma Camera Based
Whole Body Imaging Systems, 1st edn, IEC 61675-3, IEC, Geneva (1998).
[15.20] AMERICAN ASSOCIATION OF PHYSICISTS IN MEDICINE, Scintillation Camera
Acceptance Testing & Performance Evaluation, Report No. 6, AAPM, College Park,
MD (1980).
[15.21] AMERICAN ASSOCIATION OF PHYSICISTS IN MEDICINE, Computer-aided
Scintillation Camera Acceptance Testing, Report No. 9, AAPM, College Park,
MD (1982).
[15.22] AMERICAN ASSOCIATION OF PHYSICISTS IN MEDICINE, Rotating
Scintillation Camera SPECT Acceptance Testing and Quality Control, Report No. 22,
AAPM, College Park, MD (1987).
[15.23] INTERNATIONAL ATOMIC ENERGY AGENCY, Quality Control of Nuclear
Medicine Instruments, IAEA-TECDOC-317, IAEA, Vienna (1984).
[15.24] INTERNATIONAL ELECTROTECHNICAL COMMISSION, Nuclear Medicine
Instrumentation – Routine Tests – Part 2: Scintillation Cameras and Single Photon
Emission Computed Tomography Imaging, IEC TR 61948-2, IEC, Geneva (2001).
[15.25] INTERNATIONAL ELECTROTECHNICAL COMMISSION, Nuclear Medicine
Instrumentation – Routine Tests – Part 3: Positron Emission Tomographs, IEC TR
61948-3, IEC, Geneva (2005).
CHAPTER 16
FUNCTIONAL MEASUREMENTS
IN NUCLEAR MEDICINE
M.J. MYERS
Institute of Clinical Sciences,
Imperial College London,
London, United Kingdom
16.1. INTRODUCTION
The strength of nuclear medicine lies in using the tracer method to acquire
information about how an organ is or is not functioning as it should. This
modality, therefore, focuses on physiological organ function for diagnosis, rather
than on the anatomical information provided by modalities such as X ray
computed tomography (CT) and magnetic resonance imaging.
The three aspects involved in the process are: (i) choice of radioactive
tracer, (ii) method of detection of the emissions from the tracer, and (iii) analysis
of the results of the detection. The radioactive tracers on which nuclear medicine
(or molecular imaging as it is increasingly being called) is based are designed
to participate in or trace a chosen function of the body. Their distribution is
then found by detecting and locating the emissions, usually photons, of the
radioactive tracer. The tracer may be involved in a metabolic process, such as
iodine in the thyroid, or it may take part in a physiological process because of its
physical make-up, such as macroaggregate of albumin (MAA) in the lungs.
A number of methods of detection can be used. One is imaging with a gamma
camera or positron emission tomography (PET) scanner in a number of modes:
static (showing an accumulated or integrated function), dynamic (showing the
variation of the function with time), whole body and tomographic (single photon
emission computed tomography (SPECT) and PET analysis). Another is simple
counting over areas of the body which can also be static or dynamic. Yet another
is through laboratory analysis of blood samples. Imaging often provides a rough
anatomical distribution of the function but, more importantly, a quantitative idea
of the extent of the function in the whole functional unit or in component parts
such as the right and left kidney. The anatomical picture has little of the detail
of the other modalities but may be a more direct tool in assessing pathology
since it provides primary information rather than displaying the anatomical
consequences of pathology such as changes in density. The images created can
depend on the assessment of plasma clearance with time as seen with blood
sampling of a tracer that is handled exclusively by glomerular filtration and does
not enter blood cells. The most commonly used radiopharmaceutical is 51Cr-EDTA,
though 99mTc-DTPA and 125I-iothalamate are also seen.
GFR is obtained by constructing the clearance curve from one or a series of
timed measurements of plasma activity. In the multi-sample method, the expected
multi-exponential curve is defined accurately with samples taken at 10, 20 and
30 min, and 2, 3 and 4 h, or approximated with samples taken at about 2 and 4 h,
or even at only one time point between 2 and 4 h. As taking multiple samples
over a period of hours may be impractical, further simplification of the process to
the taking of a single sample is attractive. An empirical relationship between the
apparent volume of distribution and the GFR has been derived and validated to
allow this, at a less precise but acceptable accuracy.
The object of the measurements is to construct the total area under the
plasma clearance curve. It is sufficient for accuracy to assume a bi-exponential
curve with a fast and a slow component between times of 10 min and 4 h, ignoring
any initial very fast components. The zero time intercepts and rate constants for
the fast and slow components are C10 and λ1, and C20 and λ2, respectively. The
equation for GFR is:

GFR = injected activity / total area under plasma clearance curve
    = Q0/A = Q0/(C10/λ1 + C20/λ2) (16.1)

where

the injected activity Q0 has units of megabecquerels (MBq);

C10 and C20 are count rates that are converted into megabecquerels per millilitre
(MBq/mL);

and λ1 and λ2 have units of min−1, so that the GFR has units of millilitres per minute
(mL/min).
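The bi-exponential analysis can be illustrated with a simple curve stripping fit: the slow component is fitted to the late samples, subtracted from the early samples to obtain the fast component, and the area C10/λ1 + C20/λ2 is formed. The Python sketch below is illustrative only; the function names and the 60 min split between 'early' and 'late' samples are assumptions, not part of the recommended procedure:

```python
import math

def _loglin_fit(times, concs):
    # Least squares fit of ln C = ln C0 - lam*t; returns (C0, lam).
    n = len(times)
    ys = [math.log(c) for c in concs]
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    return math.exp(ybar - slope * tbar), -slope

def gfr_bi_exponential(q0_mbq, times_min, conc_mbq_per_ml, split_min=60.0):
    """GFR (mL/min) by curve stripping: fit the slow component to the
    late samples, subtract it from the early samples to fit the fast
    component, then GFR = Q0 / (C10/lam1 + C20/lam2)."""
    late = [(t, c) for t, c in zip(times_min, conc_mbq_per_ml) if t >= split_min]
    c20, lam2 = _loglin_fit([t for t, _ in late], [c for _, c in late])
    early = [(t, c - c20 * math.exp(-lam2 * t))
             for t, c in zip(times_min, conc_mbq_per_ml) if t < split_min]
    early = [(t, r) for t, r in early if r > 0]
    c10, lam1 = _loglin_fit([t for t, _ in early], [c for _, c in early])
    area = c10 / lam1 + c20 / lam2   # MBq*min/mL
    return q0_mbq / area
```

With noise-free samples drawn from a known bi-exponential, the fit recovers the underlying GFR closely.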
As the contribution to the whole area from the fast component is relatively
small and can be approximated without too much loss of accuracy, the equation
can be simplified to:
GFR = Q0/(C20/λ2) (16.2)
This produces an estimate of GFR that is too large, since omitting the fast
component underestimates the total area, though with poor renal function the
approximation has less of an effect. A correction factor to convert the
approximate GFR to the true GFR can be used. Although this correction factor
depends on the renal function, a figure of 1.15 can be used in most cases.
GFR will vary with body size and is conventionally normalized to a
standard body surface area of 1.73 m2, though other normalization variables
such as the extracellular fluid volume have been suggested. The calculation for
GFR requires measurement of the activity injected into the patient as well as the
activity in the post-injection syringe, in a standard and in the blood samples. A
number of methods are available in practice. These are based on the difference
in weights of the pre- and post-injection syringe or on measurement of a fixed
volume in a dose calibrator or on known dilutions. The counts recorded by the
well counter measuring the small activities in the blood samples also have to be
calibrated in terms of megabecquerels per count rate (MBq/count rate).
16.2.1.3. Effective renal plasma flow measurements
Renal plasma flow, often referred to as renal blood flow, has been
investigated in the past using 131I or 123I labelled ortho-iodohippurate (hippuran)
or para-amino hippurate (PAH).
Hippuran is almost completely excreted by the renal tubules and extracted
on its first pass through the renal capillary system. As the extraction is not
quite 100%, the renal function measured is called the effective renal plasma
flow (ERPF). A modern variation is to use the 99mTc labelled tubular agent MAG3
as the 99mTc label is more available than 123I. However, the extraction fraction
of MAG3 at below 50% is inferior to that of hippuran, so the measurements of
ERPF are simply estimates.
The ERPF measurement is very much the same as that of GFR in that a
known activity of the radiopharmaceutical is injected and blood samples are
taken at intervals. The sampling, however, begins earlier than for GFR, at 5 min
intervals initially and then at 30, 50, 70 and 90 min. The resulting
two-exponential time–activity curve is plotted, from which the function is given
as:

ERPF = Q0/(C10/λ1 + C20/λ2) (16.3)
16.2.2. 14C breath tests
The 14C urea breath test is used for detecting Helicobacter pylori infection,
for example, in patients with duodenal and other ulcers, and for following and
monitoring anti-H. pylori treatment. The test is based on the finding that the
bacterium H. pylori produces the enzyme urease in the stomach. As urease is not
normally found in that organ, its presence can, therefore, denote the existence
of H. pylori infection. The activity of 14C used in the test is very small, about
37 kBq, and the corresponding effective dose is also low, at less than 3 μSv.
To carry out the test, 14C urea is administered orally in the form of a
capsule. The urease in the stomach converts the urea to ammonia and 14C
labelled carbon dioxide, which is exhaled and can be detected in breath samples
using a liquid scintillation counter. The counter can measure minute quantities
of the β− emitting 14C. One or two samples are usually collected after 10–30 min
using a straw and balloon technique. Also counted are a known standard sample
and a representative background sample. Counting is done either locally or by
sending the samples to a specialized laboratory. The net disintegrations per
minute (dpm) registered by the counter are compared with standard values to
assess the degree of infection. dpm is given by:
of infection. dpm is given by:
dpm =
(S B)S t
(16.4)
(S t B)
where
S
B
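Numerically, the calculation amounts to an efficiency calibration against the standard of known disintegration rate. The sketch below assumes this reading of the formula (symbol and function names are illustrative):

```python
def net_dpm(sample_cpm, background_cpm, standard_cpm, standard_dpm):
    """Net disintegrations per minute of a breath sample, calibrated
    against a standard of known disintegration rate (standard_dpm).
    The counting efficiency is taken from the standard measurement."""
    efficiency = (standard_cpm - background_cpm) / standard_dpm
    return (sample_cpm - background_cpm) / efficiency
```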
by, for example, the IAEA) consists of a small source of known activity in a
plastic cylinder offering the same attenuation as a neck. This acts as the standard.
Counts are obtained at a distance of about 25 cm from the collimator face to
offset any inverse square errors due to different locations of the thyroid. The
percentage uptake U is then calculated from the formula:
U = (N − T)/Ca × 100 (16.5)

where

N is the count rate over the neck (thyroid);

T is the count rate over the thigh, representing body background;

and Ca is the count rate from the standard, representing the administered activity.
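As a numerical sketch (assuming, as is conventional, that the uptake is formed from the neck count rate with the thigh count rate subtracted as body background, relative to the neck-phantom standard representing the administered activity):

```python
def thyroid_uptake_percent(neck_cpm, thigh_cpm, standard_cpm):
    """Percentage thyroid uptake: net neck counts (thigh subtracted as
    body background) relative to the standard that represents the
    administered activity. All inputs are count rates under identical
    counting geometry."""
    return 100.0 * (neck_cpm - thigh_cpm) / standard_cpm
```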
transit time when the kidney contains the sum of all of the tracer extracted from
the blood and has, therefore, been termed the 'sum' phase. The second starts at
the end of the first, reflects the net activity left after loss from the kidney and
has been called the 'difference' phase. Other terms, such as 'vascular spike' and
'secretory phase', may not reflect purely renal function and are, therefore, not
very helpful.
FIG.16.1. Dynamic renal flow study after administration of 99mTc-MAG3 (left), regions of
interest drawn over the kidneys and background (top right), and corresponding renogram
curves for the right kidney (RK) and left kidney (LK) (bottom right).
between the anterior and posterior surfaces of the body. Relying on a simple
anterior view leads to artefacts due to differential attenuation of the 99mTc γ rays.
The data are analysed by drawing ROIs around the organs of interest
(the stomach and parts of the gastrointestinal tract) and creating a decay corrected
time–activity curve. An assessment of the gastric emptying function is made
from standard values. An alternative way of expressing the result is through the
half-emptying time.
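The geometric mean and decay correction steps can be sketched as follows; the function name and the use of a 99mTc half-life of about 6 h (361 min) are illustrative assumptions:

```python
import math

def geometric_mean_curve(anterior, posterior, times_min, half_life_min=361.0):
    """Decay corrected geometric mean time-activity curve from paired
    anterior/posterior counts. The default half-life approximates that
    of 99mTc (about 6 h); counts are corrected back to injection time."""
    lam = math.log(2.0) / half_life_min
    return [math.sqrt(a * p) * math.exp(lam * t)
            for a, p, t in zip(anterior, posterior, times_min)]
```

The geometric mean suppresses the depth dependence of attenuation, and the decay correction makes the curve reflect emptying alone.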
16.3.4.2. Analysis of colonic transit
Colonic transport analysis can be performed using 111In labelled
non-absorbable material, such as DTPA or polystyrene micropellets administered
orally. Indium-111 is chosen rather than the more accessible 99mTc because of its
longer half-life (2.7 d) and the possibility of imaging over a longer time since
images are taken at, for example, 6, 24, 48 and 72 h.
As with the stomach procedures, a geometric mean parametric image of
anterior and posterior views may be used in the quantification, if this facility
is available in the nuclear medicine software. A geometric centre of the activity
(also called the centre of mass) may be tracked over time by defining particular
segments in the colon, perhaps 5–11 in number (e.g. the ascending, transverse,
descending and rectosigmoid colon, and excreted stool), multiplying the
individual segment counts by weighting factors from 1 to 5, respectively, and
summing the resulting numbers. In addition to time–activity curves for the
individual segments, the rate of movement of the geometric centre as a function
of time can be assessed by plotting this as a time–position profile. A colonic
half-clearance time may be calculated and compared with historical normal
control values of colonic transport.
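The geometric centre calculation can be sketched as below; normalization by the total counts is assumed, as is conventional for a centre of mass (the function name is illustrative):

```python
def geometric_centre(segment_counts, weights=(1, 2, 3, 4, 5)):
    """Count-weighted geometric centre of colonic activity: each
    segment's counts are multiplied by its weighting factor (1 for the
    ascending colon up to 5 for excreted stool) and the weighted sum is
    normalized by the total counts."""
    total = sum(segment_counts)
    return sum(w * c for w, c in zip(weights, segment_counts)) / total
```

A value of 1.0 means all activity is still in the ascending colon; 5.0 means complete excretion.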
16.3.4.3. Oesophageal transit
This function is studied by imaging the transit of a bolus of radiolabelled
non-metabolized material such as 99mTc sulphur colloid. Either the whole
oesophagus may be included in an ROI and a time–activity curve generated
for the whole organ, or a special display may be generated, whereby the counts
in successive regions of the oesophagus are displayed on a 2D space–time
map called a condensed image. The counts in the regions are displayed in the
y direction as colour or grey scale intensities corresponding to the count rate,
against time along the x axis (Fig. 16.2). The result is a pictorial idea of the
movement of the bolus down the oesophagus.
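Constructing a condensed image amounts to collapsing each time frame across the transverse direction and stacking the resulting columns; a minimal sketch (the frame layout assumed here, rows running down the oesophagus, is an illustration):

```python
def condensed_image(frames):
    """Build a condensed (space-time) image. Each time frame is a 2D
    count array whose rows run down the oesophagus; each frame is
    collapsed across the transverse direction into a single column and
    the columns are stacked in time order. The result is returned with
    rows = distance down the oesophagus and columns = time."""
    columns = [[sum(row) for row in frame] for frame in frames]
    n_rows = len(columns[0])
    return [[col[i] for col in columns] for i in range(n_rows)]
```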
FIG. 16.2. Oesophageal transit is imaged as a space–time matrix. As the bolus of radioactivity
passes down the oesophagus, the counts from successive regions of interest, represented on a
grey scale, are placed in consecutive positions in the matrix in the appropriate time slot. A
normal transit will be shown as a movement of the bolus down and to the right in the matrix.
Retrograde peristalsis will be shown as a movement to the right and upwards.
bile ducts. The gall bladder is then made to contract and empty by injecting a
hormone called cholecystokinin and the imaging of the gall bladder continued,
the whole test taking between 1 and 2 h. The amount of the radiolabel that
leaves the gall bladder is assessed by the difference in counts in the ROI over
the emptied gall bladder divided by the counts from the ROI over the full gall
bladder. Expressed as a percentage, this gives the ejection fraction. An ejection
fraction above 50% is considered as normal and an ejection fraction below about
35% as abnormal, suggesting, for example, chronic acalculous cholecystitis.
16.3.5. Cardiac function
16.3.5.1. General discussion
The two main classes of cardiac function are blood flow in the myocardium
and in the blood pool and ventricles. Images are acquired in both planar and
tomographic modes, and the data may be acquired dynamically over sequential
time periods or as a gated study triggered by the electrocardiogram (ECG), or
as part of a first-pass study. The information is presented on a global or regional
basis as conventional or parametric images, or as curves from which quantitative
parameters may be derived. A range of pharmaceutical agents labelled with single
and positron emitting isotopes are used.
Cardiac functions that may be investigated with nuclear medicine techniques
run into many dozens, though relatively few are covered by standard clinical
practice and some are confined to research. A list of cardiac functions would,
therefore, include myocardial perfusion, myocardial metabolism of glucose and
fatty acids, myocardial receptors, left ventricular ejection fraction, first-pass
shunt analysis, wall motion and thickness, stroke volumes, cardiac output and its
fractionation, circulation times, systolic emptying rate, diastolic filling rate, time
to peak filling or emptying rate, and regional oxygen utilization.
Despite the usefulness of nuclear cardiology procedures, a worldwide
survey has shown a wide variation in their use and availability. There is a high
application rate in the United States of America (where cardiology accounts for
about half of nuclear medicine procedures) and Canada, less in western Europe
and Japan, and low application elsewhere such as in the Russian Federation,
Asia and some parts of South America. One reason for this pattern of use may
be the degree of access to training for physicians. Another reason may be that
gated SPECT imaging and analysis requires a high level of instrumentation and
software.
However, the procedures that can be carried out in any particular
department will depend very much on the nuclear medicine software provided by
the supplier of the gamma camera or from a specialized nuclear software supplier.
FIG. 16.3. Curves obtained from (a) first-pass angiography and (b) a multiple-gated
acquisition study, showing how end diastolic volume (EDV) and end systolic volume (ESV),
used to determine ejection fraction, are represented in each technique.
The ejection fraction (EF) is calculated from the end diastolic and end systolic
volumes as:

EF = (EDV − ESV)/EDV (16.6)
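Since counts in the left ventricular ROI are proportional to ventricular volume, the ejection fraction can be computed directly from a background corrected gated count curve; a minimal sketch (function name illustrative):

```python
def ejection_fraction(gated_counts, background_per_frame=0.0):
    """Ejection fraction from a background corrected left ventricular
    time-activity curve: counts are proportional to volume, so
    EF = (EDV - ESV)/EDV uses the maximum and minimum net counts."""
    net = [c - background_per_frame for c in gated_counts]
    edv, esv = max(net), min(net)
    return (edv - esv) / edv
```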
FIG. 16.4. The multiple-gated acquisition scan. The sequence of these gated images shows the
heart cycle with higher counts and better statistics than the individual images, allowing better
interpretation of the data than in Fig. 16.3(b).
analysis may be compared with similar ones obtained with the patient at rest.
A number of protocols involving different times of examination and different
administrations of radioactivity have been devised to carry out the stress/rest
examinations in one rather than two days, given the potential long washout
periods involved. Imaging can be performed early (at about 15 min) following
injection of 201Tl or 99mTc-sestamibi at rest or after the stress test, and/or delayed
(after 1–4 h or after 24 h) after injection at rest or under stress of the longer
lived 201Tl. These protocols give rise to different types of image. In general, the
imaging properties of 99mTc give superior images, though 201Tl is superior from a
physiological viewpoint as it is a better potassium analogue.
Conventional cardiac SPECT imaging may be carried out with a single or
double headed gamma camera using circular or elliptical orbits, the latter allowing
closer passes over the patient and, consequently, better resolution. Attenuation
correction may be performed on the emission images using an isotope or CT
X ray source. However, there is still debate as to the usefulness of attenuation
correction since this technique may give rise to artefacts due to mismatch of the
emission and transmission images on the fused images.
16.3.5.5. Technical aspects of SPECT and PET imaging
Thallium is not an ideal gamma camera imaging radionuclide, as it emits
rather low energy characteristic X rays between 69 and 80 keV that are easily
attenuated (and, therefore, lost) in the body. The attenuation, by the breasts and
the chest wall, varies for the different projections around the body and gives rise
to artefacts in the perfusion images if not corrected. The higher energy of 99mTc
(140 keV), while still liable to attenuation, allows better collection of data from
the heart and less variation in the attenuation. Although SPECT/CT has still not
become a viable option (as has PET/CT where the two modalities have become
inseparable), this would be a better option for attenuation correction than the
rather cumbersome isotope attenuation correction devices that have been used in
the past.
Scattering of the photons before detection in the camera also leads to
problems in that their origin might be misplaced and loss from deeper structures
occurs. Recently, software to reduce the effects of scattering by modelling its
behaviour within the field of view has become available. Another source of
degradation of the image quality is the loss of resolution with distance from the
collimator face. Although high resolution collimators are usually chosen for
99m
Tc imaging, the basic resolution of the camera at the level of the heart is rather
poor. Again, software techniques to model this behaviour and correct for it have
become available.
Gamma camera images, unlike X ray ones, are always subject to lack of
counts and are, therefore, prone to statistical errors. Using a double headed rather
than a single headed system is, therefore, an advantage. There is still discussion
on the best angle between the heads, and this may vary between less than 90°
and 180°. Scanning the patient with the collimator as close to the source of activity as
possible also ensures the best resolution, so a non-circular orbit is usually chosen.
Owing to the lack of accessible counts with 201Tl (its relatively long half-life
and high effective dose reducing the activity that can be administered), a general
purpose collimator is used, which is more efficient but less accurate than the high
resolution collimator used with 99mTc.
PET imaging based on the simultaneous detection of opposed annihilation
radiation from the original positron radiotracer is intrinsically tomographic
and invariably comes with anatomical landmarking and attenuation correction
from the CT. It is also more sensitive and more accurate (because of its better
resolution) than SPECT and uptake of the radiopharmaceuticals can be
quantified absolutely. In theory, the use of 13N labelled ammonia and 18F-FDG
can differentiate more about the state of the myocardium, its blood flow and
metabolism, than the SPECT tracers.
Processing of the data starts with filtered back projection of the data using
standard filters such as a Butterworth or Hanning filter with appropriate cut-off
spatial frequencies and order, or an iterative reconstruction may be performed.
Attenuation correction may be applied. This stage produces a set of contiguous
slices parallel to the transverse axis of the patient which can be combined to
populate a 3D matrix of data. As the heart lies at an angle to this body axis,
a process of reorientation is performed. From the original matrix, the data that
lay parallel to the axes of the heart itself can be selected to form vertical long
axis (parallel to the long axis of the left ventricle and perpendicular to the
septum), horizontal long axis (parallel to the long axis of the left ventricle and
to the septum) and short axis (perpendicular to the long axis of the left ventricle)
slices through the myocardium of the particular patient, as shown in Figs 16.5
and 14.12.
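For reference, the Butterworth window mentioned above has, in one common parameterization, unity gain at zero frequency and a gain of 0.5 at the cut-off frequency, rolling off more steeply with increasing order:

```python
def butterworth(f, cutoff, order=5):
    """Low-pass Butterworth window, as commonly used to suppress
    high frequency noise in filtered back projection: gain 1 at zero
    frequency, 0.5 at the cut-off, steeper roll-off for higher order.
    The default order of 5 is an illustrative choice."""
    return 1.0 / (1.0 + (f / cutoff) ** (2 * order))
```

In practice, the cut-off frequency and order are chosen to trade noise suppression against resolution for the study at hand.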
Cardiac processing software, working on features extracted from the shape
of the myocardium, allows easy automatic alignment which may also be operator
guided. The reoriented sections form three sets of images that are displayed in a
standard format to show, for example, the apex and heart surfaces at each stage of
gating of the heart cycle (Fig.16.4). The display may take several forms including
the simultaneous display of many sections in each of the three axes both at rest
and after stress, or as a moving image of the beating heart. The algorithms used
to carry out this process differ with the provider of the computer software and are
recognized under specific names.
the commercial products available. This results in different looking maps that,
although individually validated, are not directly comparable. It would, therefore,
be prudent for one software package to be standardized at any one reporting
centre. The results from a particular study can be compared with a reference
image derived from a so-called normal database to allow a better estimation of
the extent of the defects. However, it is often difficult to match the population
being examined with the available validated normal data, for example, in terms
of gender and ethnicity.
BIBLIOGRAPHY
PETERS, A.M., MYERS, M.J., Physiological Measurements with Radionuclides in Clinical
Practice, Oxford University Press, Oxford (1998).
ZIESSMAN, H.A., O'MALLEY, J.P., THRALL, J.H., Nuclear Medicine: The Requisites,
3rd edn, Mosby-Elsevier (2006).
FURTHER READING
Recommended methods for investigating many of these functions
may be found on the web sites of the Society of Nuclear
Medicine (www.snm.org), the British Nuclear Medicine Society
(www.bnms.org.uk) and the International Committee for Standardization in
Haematology (http://www.islh.org/web/published-standards.php).
CHAPTER 17
QUANTITATIVE NUCLEAR MEDICINE
J. OUYANG, G. EL FAKHRI
Massachusetts General Hospital
and Harvard Medical School,
Boston, United States of America
Planar imaging is still used in clinical practice, although tomographic
imaging (single photon emission computed tomography (SPECT) and positron
emission tomography (PET)) is becoming more established. In this chapter,
quantitative methods for both imaging approaches are presented. The discussion
of planar imaging is limited to single photon emitters. For both SPECT and PET,
the focus is on the quantitative methods that can be applied to reconstructed
images.
17.1. PLANAR WHOLE BODY BIODISTRIBUTION MEASUREMENTS
Planar whole body imaging is almost always carried out by translating the
patient and bed in the z direction between opposed heads of a dual head standard
scintillation camera, typically in the anterior and posterior positions. The resulting
images are 2D projections of the 3D object being studied. The attenuation
of photons varies with the distance and the material the photons have to travel
through the object before reaching the detector. One approach to compensate for
attenuation in planar imaging is to perform conjugate counting with the geometric
mean, which consists of acquiring data from opposite views and combining them
into a single dataset. Figure 17.1 shows an imaging object with uniform linear
attenuation coefficient μ viewed by two gamma detectors placed in opposite
directions. A point source in the object lies at attenuation depths d1 and d2 from
detectors 1 and 2, respectively. The projected counts P1 and P2 measured on
detectors 1 and 2, respectively, are:

P1 = I0 exp(−μd1), P2 = I0 exp(−μd2) (17.1)

where I0 is the unattenuated number of counts that would be detected in the
absence of attenuation.

These two conjugate views are combined using the geometric mean PG,
defined as:

PG = √(P1P2) = I0 exp(−μD/2) (17.2)

where D = d1 + d2 is the total thickness of the object along the line joining the
detectors. The geometric mean thus depends only on the total thickness, not on
the depth of the source.
FIG.17.1. Illustration of conjugate viewing with the geometric mean for attenuation
compensation.
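The depth independence of the geometric mean can be verified numerically; the sketch below moves a point source through a uniform 20 cm attenuator and shows that the geometric mean of the conjugate views is unchanged (the numerical values are illustrative):

```python
import math

def conjugate_projections(i0, mu, d1, d2):
    """Counts seen by two opposed detectors for a point source at
    depths d1 and d2 in a uniform attenuator with linear attenuation
    coefficient mu."""
    return i0 * math.exp(-mu * d1), i0 * math.exp(-mu * d2)

def geometric_mean(p1, p2):
    """Geometric mean of conjugate views; depends only on the total
    thickness d1 + d2, not on the depth of the source."""
    return math.sqrt(p1 * p2)
```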
One accurate, although time consuming, way to define an ROI is to draw the
boundary of the volume of interest on every slice containing the organ or tumour
of interest. It is difficult and time consuming to draw the boundary of an ROI
manually because the activity profile at the edge of the area of interest is not
abrupt but changes slowly, and, therefore, deciding on a threshold to determine
such a boundary is not always straightforward. Manually drawn ROIs are,
therefore, generally not reproducible. Alternative semi-automatic and automatic
methods use edge detection techniques, including count thresholds, isocontours,
and maximum slope or maximum count gradient, to improve reproducibility.
Finally, another approach that has been successfully used to determine organs or
volumes with a specific time–activity behaviour in a dynamic acquisition is
factor analysis of dynamic sequences [17.2, 17.3].
17.2.2. Use of standards
When performing quantification from projections in the clinic, it can be
helpful to image a standard activity (known measured activity) along with the
patient (i.e. in the same projection). The standard (usually a small flask of the
radiotracer) provides a conversion between radioactivity concentration (MBq/mL)
and counts in the projections. It should be noted that the use of standards does not
guarantee accurate absolute quantification because the standard activity is not
affected by scatter, attenuation and partial volume effects in the same way as the
activity distribution in the patient.
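The conversion itself is a simple proportionality; a minimal sketch (the function name is illustrative), subject to the caveats above about scatter, attenuation and partial volume effects:

```python
def counts_to_activity_mbq(roi_counts, standard_counts, standard_activity_mbq):
    """Convert ROI counts to activity using a standard of known
    activity imaged in the same projection. This assumes the standard
    and the patient activity are imaged under comparable conditions,
    which, as noted in the text, is only approximately true."""
    return roi_counts * standard_activity_mbq / standard_counts
```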
17.2.3. Partial volume effect and the recovery coefficient
Ideally, the activity intensity within a region in a reconstructed image should
be proportional to the actual activity level in the region if scatter, attenuation,
randoms (PET only) and dead time corrections are properly applied, and if it is
assumed that there is very little noise. However, the partial volume effect (PVE)
significantly affects the quantification based on the size of the object of interest.
The PVE includes two different phenomena. One is the image blurring effect
caused by the finite spatial resolution. The blurring results in spillover between
regions. The image of a hot region, such as a tumour, appears larger and dimmer.
This limited resolution effect is theoretically described by a convolution operation.
The other PVE phenomenon is the so-called tissue fraction effect caused by the
fact that the boundaries of certain voxels do not match the underlying activity
distribution. The net effect of the PVE is reduced contrast between the object and
the surrounding areas, as well as reduced absolute uptake in a hot region. For
tumour imaging, the PVE can affect both the tumour apparent uptake and tumour
apparent size.
Recovery coefficient
The PVE is dependent on the size and shape of the region, the activity
distribution in the surrounding background, image spatial resolution, pixel size
and how uptake value is measured. The PVE correction is complicated by the
fact that not only activity from inside the region spills out but also activity from
outside the region spills in. As these two activity dependent flows are not usually
balanced, it is difficult to predict the overall effect of the PVE.
uptake). The RC correction method is a very simple method commonly used for
PVE correction in nuclear medicine [17.4]. However, the RC method can only be
used to correct the spillover between two structures. The geometric transfer matrix
is an approach that can account for the spillover among any number of structures
[17.5]. Deconvolution is another approach to perform PVE correction without
any assumption regarding tumour size, shape, homogeneity or background
activity. These three correction methods are used to correct the uptake value in a
region. It is also possible to model PVE in image reconstruction to obtain PVE
corrected images.
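The RC approach can be sketched as follows (the RC table below is hypothetical; in practice it would be derived from phantom measurements of spheres of known size and activity):

```python
def rc_corrected_uptake(measured_uptake, object_size_mm, rc_table):
    """Correct a measured uptake for the partial volume effect using a
    recovery coefficient (RC = measured/true uptake) looked up for the
    nearest tabulated object size."""
    nearest = min(rc_table, key=lambda size: abs(size - object_size_mm))
    return measured_uptake / rc_table[nearest]

# Hypothetical RC table from a sphere phantom experiment (size in mm -> RC)
rc = {10: 0.4, 20: 0.7, 30: 0.85, 40: 0.95}
```

A 22 mm lesion with measured uptake 3.5 would use the RC for the nearest 20 mm sphere (0.7), giving a corrected uptake of 5.0. As noted above, this simple method only accounts for spillover between two structures.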
17.2.4. Quantitative assessment
Reconstructed images can be assessed qualitatively and quantitatively.
Qualitative interpretation is based on visualization that identifies regions with
abnormal patterns of uptake of the injected radiopharmaceutical as compared to
the known variants of radiotracer distribution. Quantitative assessment can be
either relative or absolute:
Target to background contrast: The target to background contrast is the ratio
between the concentration within the target region and the concentration
within the surrounding background. Therefore, contrast is considered a
relative quantification metric.
Radiotracer concentration: The radiotracer concentration (Bq/mL) is the
amount of radioactivity per unit volume within a defined ROI. Sometimes,
radiotracer concentration is converted into a different metric. For example,
the standardized uptake value (SUV) is the radiotracer concentration
normalized by injected dose and patient weight, and is mainly used to assess
tumour glucose utilization for fluorodeoxyglucose (FDG) PET. Therefore,
SUV is a semi-quantitative metric.
Kinetic parameters: A time sequence of PET measurements, i.e. dynamic
quantitative PET, makes it possible to measure tracer kinetics that describe
the interaction between the tracer and physiological processes. For example,
a water based tracer can be used to measure blood flow; a glucose based
tracer, such as FDG, can be used to measure metabolic rate. This is the
most accurate and absolute quantification metric that can be derived from
PET measurements. Usually, absolute quantification is best achieved with
PET because all projections are acquired simultaneously and, therefore,
dynamic imaging can be more easily performed than with rotating SPECT
cameras.
Contrast = (CT − CB) / CB (17.3)

where CT and CB are the mean concentrations within the defined target and
background region, respectively.
17.2.4.2. Relative quantification using the standardized uptake value
The SUV [17.6] is a widely used semi-quantitative metric to measure
tumour glucose utilization, mostly in PET imaging. SUV (g/mL) is defined as:
SUV = Ci (kBq/mL) / [A (kBq) / W (g)] (17.4)
where
Ci is either the mean (for SUVmean) or maximum (for SUVmax) decay-corrected
activity concentration (kBq/mL) within the defined region in the image (or
tissue);
A is the injected activity (kBq);
and W is the patient weight (g).
It is normally assumed that the density of tissue is equivalent to 1.0 g/mL,
such that the units effectively cancel and the SUV becomes a dimensionless
measure. The primary use of the SUV is to quantify activity in an ROI independent
of administered activity and patient weight. It has been shown that the SUV may
correlate with the metabolic rate of FDG in different tumour types, especially
when normalized for plasma glucose levels [17.7]. SUVmax instead of SUVmean
is often used because it is less sensitive to PVEs and it avoids including necrotic
or other non-tumour elements. However, SUVmax has lower reproducibility and a
larger bias than SUVmean as it is computed over a smaller number of voxels [17.8].
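Equation (17.4) and the SUVmean/SUVmax distinction can be sketched as follows (the voxel concentrations, injected activity and weight are illustrative, and the concentrations are assumed already decay corrected):

```python
def suv(c_kbq_per_ml, injected_kbq, weight_g):
    """SUV = Ci / (A / W), Eq. (17.4).  With tissue density taken as
    1 g/mL, the result is effectively dimensionless."""
    return c_kbq_per_ml / (injected_kbq / weight_g)

def suv_metrics(voxel_concs, injected_kbq, weight_g):
    """SUVmean and SUVmax over the voxels of an ROI."""
    suvs = [suv(c, injected_kbq, weight_g) for c in voxel_concs]
    return {"SUVmean": sum(suvs) / len(suvs), "SUVmax": max(suvs)}
```

SUVmax is read from a single voxel, which is why it is noisier but less affected by the PVE than SUVmean.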
To address this issue, a further term has been introduced, SUVpeak, which is
defined as the mean SUV value in a group of voxels surrounding the voxel with
the highest activity concentration in the tissue. The SUVs will be affected by the
level of image noise, which is affected by the reconstructed activity concentration,
the parameters chosen in the reconstruction algorithm and a myriad of other
factors. SUVpeak is intended to be a more robust measure than SUVmax.
The primary drawback with the SUV is that it is affected by many sources
of variability [17.9]. In addition to mathematical factors (e.g. ROI, noise and
PVE) that can alter the accuracy of the SUV, there are a number of biological
factors that can variably and unpredictably impact the SUV. Firstly, the SUV
calculation is based on the total administered dose. If a portion of the radiotracer
becomes interstitially infiltrated during intravenous injection, it is not routine
practice to correct for the activity that is trapped at the injection site and,
therefore, failing to circulate through the body. As a result, the calculated SUV
can be artificially low, because the total administered dose used in calculating
the SUV is greater than the actual dose reaching its intended intravascular target.
Secondly, the glucose avidity of tissues in the body is dependent on numerous
factors such as the presence of diabetes, insulin level and glucose level (the latter
two fluctuate widely depending on the patient's most recent meal). If a patient
is diabetic, glucose is metabolized poorly by normal tissues, therefore leaving
more glucose available in the bloodstream to be metabolized by abnormally
glycolytic tissues such as tumour and infection, theoretically resulting in an
artificially elevated SUV. Conversely, if a diabetic patient has just received a
dose of insulin, the opposite effect may occur, lowering the SUV. In non-diabetic
patients, a patient who has recently eaten will have high glucose and insulin
levels, with a similar effect of lowering the SUV. Thirdly, FDG is cleared from
the body through urinary excretion. Patients with impaired renal function will
extract FDG more slowly from the bloodstream, leaving more FDG available
for metabolism by both normal and hypermetabolic tissues. Finally, body mass
is also a parameter used in calculating the SUV. In patients with a large volume
of ascites, or other significant third-spacing processes, the body mass of the
patient will be elevated by the presence of fluid that is neither intravascular nor
capable of uptake of radiotracer. Therefore, the denominator used in calculating
the SUV will be artificially large due to overestimation of the size of the patient,
theoretically causing an artificially low SUV to be calculated. Another factor
that plays a key role in standardization of tumour uptake is the reproducibility of
the measurement, and it has been previously shown that the correlation between
uptake measurements made in an identical manner at different clinical sites was
relatively low and significantly different to the standard reference measurement.
It is not uncommon for the SUV to vary by 50% because of one or more such
effects. Therefore, the SUV should be properly corrected for all of these effects.
where V is the tumour volume (mL) that can be obtained using 3D contour
software.
TLG provides a measurement of the total FDG uptake in the tumour region,
which reflects total rather than average tumour metabolism.
17.2.4.3. Absolute quantification using kinetic modelling
Dynamic imaging consists of acquiring data as a series of time frames
that capture the time-activity curve in each voxel over time, making it possible
to quantify tracer kinetics in vivo. Using a given radiotracer, the interaction of
the radiotracer with the body's physiological, biochemical or pharmacokinetic
processes can be monitored. For example, glucose metabolism can be assessed
by FDG, a glucose analogue radiotracer. With an understanding of the underlying
physiological factors that control the tissue radioactivity levels, mathematical
models, known as kinetic models, can be constructed with one or more parameters
that describe the distribution of radiotracers as a function of time in the body
and fit the time-activity curves in each voxel in the organ of interest. Kinetic
models used in nuclear medicine are based on compartments within a volume
or space within which the radiotracer becomes uniformly distributed almost
instantly, i.e. contains no significant concentration gradients. In other words,
compartmental modelling describes systems that vary in time but not in space.
More complicated kinetic models that include spatial gradients are generally not
applicable to nuclear medicine because of limited spatial resolution.
For a single-tissue compartmental model illustrated in Fig.17.3, the rate of
change in tracer concentration in a tissue is:
dCt(t)/dt = K1 Ca(t) − k2 Ct(t) (17.6)
where
Ct(t) is the tracer concentration in the tissue;
Ca(t) is the tracer concentration in the blood;
and K1 and k2 are the first order rate constants for the flux into the tissue and out
of the tissue, respectively.
FIG.17.3. Single-tissue compartmental model that describes the tracer exchange between
blood and tissue.
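Eq. (17.6) can be integrated numerically; the sketch below uses a simple Euler step with an assumed constant blood concentration (all parameter values are illustrative):

```python
def simulate_one_tissue(K1, k2, blood_curve, dt):
    """Euler integration of dCt/dt = K1*Ca(t) - k2*Ct(t) (Eq. (17.6)).
    blood_curve holds samples of Ca spaced dt apart; returns Ct at the
    same sample times."""
    Ct = 0.0
    tissue_curve = []
    for Ca in blood_curve:
        Ct += dt * (K1 * Ca - k2 * Ct)
        tissue_curve.append(Ct)
    return tissue_curve

# With constant Ca, Ct approaches the equilibrium value (K1/k2)*Ca
curve = simulate_one_tissue(K1=0.1, k2=0.1, blood_curve=[1.0] * 2000, dt=1.0)
```

In practice the rate constants K1 and k2 are estimated by fitting the model to the measured tissue and blood time-activity curves, not assumed as here.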
The bias is mainly due to faulty measuring devices or procedures. One common
bias measure is defined as:
BIAS = (1/N) Σi (xi − t) (17.8)

where N is the number of measurements, xi is the ith measurement, t is the true
value, and the sums run from i = 1 to N. Precision is quantified by the variance:

VAR = (1/N) Σi (xi − x̄)² (17.9)

where the sample mean x̄ is:

x̄ = (1/N) Σi xi (17.10)

Accuracy is quantified by the mean square error (MSE), which combines both bias
and variance:

MSE = (1/N) Σi (xi − t)² (17.11)
To have the same scale as the mean value, precision is also quantified by
standard deviation, defined as the square root of variance. Similarly, accuracy is
also quantified as root MSE defined as the square root of MSE.
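These measures can be sketched directly (a minimal illustration; in practice the xi would be, for example, repeated activity estimates of a region whose true activity t is known from a phantom):

```python
def bias(estimates, true_value):
    """Eq. (17.8): mean deviation of the estimates from the true value."""
    return sum(x - true_value for x in estimates) / len(estimates)

def variance(estimates):
    """Eq. (17.9): mean squared deviation from the sample mean."""
    mean = sum(estimates) / len(estimates)
    return sum((x - mean) ** 2 for x in estimates) / len(estimates)

def mse(estimates, true_value):
    """Eq. (17.11): mean squared deviation from the true value."""
    return sum((x - true_value) ** 2 for x in estimates) / len(estimates)
```

Note that with these definitions MSE = BIAS² + VAR, which is why root MSE captures both accuracy and precision.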
17.2.6. Evaluation of image quality
Image quality is a concept that has received considerable recent attention,
in an effort to define it more rigorously. A good surrogate for image
quality is image utility, i.e. the usefulness of an image for a particular detection or
quantification task, rather than measures of image properties, such as resolution,
contrast or stationarity of the point spread function, or of image fidelity, such
as normalized MSE. Measures of quantitative accuracy, precision and root MSE
are, of course, very useful when first assessing a new system or a quantification
method; however, for more rigorous evaluation or for definitive optimization
of data acquisition strategies, reconstruction techniques or image processing
procedures, it is recommended to carry out an objective assessment of image
quality based on detection or quantification tasks. Performance metrics for task
based estimation or detection tasks can be viewed as measures of image utility
which are the most clinically relevant bases on which to evaluate or optimize
imaging systems.
The most conclusive assessment of image quality is based on
human-observer studies. However, such studies are not routinely performed
clinically because they are time and resource consuming. Instead, a numerical
(or mathematical) observer is often used. It is beyond the scope of this chapter
to detail the different numerical observers used in SPECT and PET (for a review,
see Refs [17.13, 17.14]).
One of the simplest numerical observer methods is the non-prewhitening
matched filter technique, which is the optimal observer when images have uncorrelated
noise. Assuming N noise realizations of target-present image S and N noise
realizations of target-absent image B, the non-prewhitening signal to noise ratio
(SNRNPW) can be calculated as:
SNR_NPW = (S̄T − B̄T) / √(½[σ²(ST) + σ²(BT)]) (17.12)

where S̄T and B̄T are the mean observer outputs for the target-present and
target-absent images, respectively, and σ²(ST) and σ²(BT) are the corresponding
variances.
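The SNR_NPW computation from samples of the observer output can be sketched as follows (the sample values in the test below are illustrative only):

```python
import math

def snr_npw(target_present, target_absent):
    """Non-prewhitening SNR: difference of the mean observer outputs for
    target-present and target-absent images, divided by the pooled
    standard deviation of the two sets."""
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    numerator = mean(target_present) - mean(target_absent)
    return numerator / math.sqrt(0.5 * (var(target_present) + var(target_absent)))
```

In a real evaluation, each sample would be the scalar output of the matched filter applied to one noise realization of the image.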
CHAPTER 18
INTERNAL DOSIMETRY
C. HINDORF
Department of Radiation Physics,
Skåne University Hospital,
Lund, Sweden
18.1. THE MEDICAL INTERNAL RADIATION DOSE FORMALISM
18.1.1. Basic concepts
The Committee on Medical Internal Radiation Dose (MIRD) is a
committee within the Society of Nuclear Medicine. The MIRD Committee was
formed in 1965 with the mission to standardize internal dosimetry calculations,
improve the published emission data for radionuclides and enhance the data on
pharmacokinetics for radiopharmaceuticals [18.1]. A unified approach to internal
dosimetry was published by the MIRD Committee in 1968, MIRD Pamphlet
No. 1 [18.2], which was updated several times thereafter. Currently, the most
well known version is the MIRD Primer from 1991 [18.3]. The latest publication
on the formalism was published in 2009 in MIRD Pamphlet No. 21 [18.4], which
provides a notation meant to bridge the differences in the formalism used by the
MIRD Committee and the International Commission on Radiological Protection
(ICRP) [18.5]. The formalism presented in MIRD Pamphlet No. 21 [18.4] will
be used here, although some references to the quantities and parameters used
in the MIRD primer [18.3] will be made. All symbols, quantities and units are
presented in Tables 18.1 and 18.2.
The MIRD formalism gives a framework for the calculation of the absorbed
dose to a certain region, called the target region, from activity in a source region.
The absorbed dose D is calculated as the product of the time-integrated
activity Ã and the S value:

D = Ã S (18.1)
The International System of Units unit of absorbed dose is the joule per kilogram
(J/kg), with the special name gray (Gy) (1 J/kg = 1 Gy).
The time-integrated activity equals the number of decays that take place in a
certain source region, with units Bq s, while the S value denotes the absorbed
dose rate per unit activity, expressed in Gy (Bq s)⁻¹ or as a multiple thereof,
for example, in mGy (MBq s)⁻¹. The time-integrated activity was named the
cumulated activity in the MIRD Primer [18.3] and the absorbed dose rate per unit
activity was named the absorbed dose per cumulated activity (or the absorbed
dose per decay). A source or a target region can be any well defined volume,
for example, the whole body, an organ/tissue, a voxel, a cell or a subcellular
structure. The source region is denoted rS and the target region rT:
D(rT) = Ã(rS) S(rT ← rS) (18.2)
The number of decays in the source region, denoted the time-integrated activity,
is calculated as the area under the curve that describes the activity as a function
of time in the source region after the administration of the radiopharmaceutical
(A(rS, t)). The activity in a region as a function of time is commonly determined
from consecutive quantitative imaging sessions, but it could also be assessed via
direct measurements of the activity on a tissue biopsy, a blood sample or via
single probe measurements of the activity in the whole body. Compartmental
modelling is a theoretical method that can be used to predict the activity in a
source region in which measurements are impossible.
Ã(rS) = ∫ A(rS, t) dt (18.3)
The time-integration period TD, for which the time-integrated activity in the
source region is determined, is commonly chosen from the time of administration
of the radiopharmaceutical until infinite time, i.e. 0 to ∞ (Eq. (18.4)). However,
the integration period should be matched to the biological end point studied
in combination with the time period in which the relevant absorbed dose is
delivered.
Ã(rS, TD) = ∫₀^TD A(rS, t) dt (18.4)
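Computing the time-integrated activity from a measured time-activity curve can be sketched with trapezoidal integration plus an optional analytic mono-exponential tail (function name and values are illustrative):

```python
def time_integrated_activity(times_s, activities_bq, tail_lambda=0.0):
    """Trapezoidal approximation of Eq. (18.4); if tail_lambda > 0, a
    mono-exponential tail A_last*exp(-tail_lambda*(t - t_last)) is
    integrated analytically from the last sample to infinity."""
    area = 0.0
    for i in range(len(times_s) - 1):
        dt = times_s[i + 1] - times_s[i]
        area += 0.5 * (activities_bq[i] + activities_bq[i + 1]) * dt
    if tail_lambda > 0.0:
        area += activities_bq[-1] / tail_lambda  # analytic tail to infinity
    return area  # Bq s
```

As discussed below, the choice of integration method and the extrapolation beyond the last measurement can both strongly influence the result.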
concept. The area under the curve describing the activity as a function of time
can be expressed as ∫₀^TD A(rS, t) dt = ã(rS) A0, where A0 is the administered
activity. The time-integrated activity coefficient can be described as an average
time that the activity spends in a source region:

ã(rS) = Ã(rS) / A0 (18.5)
The S value is defined according to Eq. (18.6), which includes the energy emitted
E, the probability Y for radiation with energy E to be emitted, the absorbed
fraction φ and the mass of the target region M(rT). The absorbed fraction
is defined as the fraction of the energy emitted from the source region that is
absorbed in the target region and equals a value between 0 and 1. The absorbed
fraction is dependent on the shape, size and mass of the source and target regions,
the distance and type of material between the source and the target regions, the
type of radiation emitted from the source and the energy of the radiation:
S = E Y φ / M(rT) (18.6)

FIG. 18.1. The time-integrated activity coefficient (the residence time in the MIRD
Primer [18.3]) is calculated as the time-integrated activity divided by the injected activity,
which gives an average time the activity spends in the source region.
The product of the energy emitted E and its probability to be emitted Y is denoted
Δ, the mean energy emitted per decay of the radionuclide. The full formalism
also includes a summation over all of the transitions i per decay:

S(rT ← rS) = Σi Δi φ(rT ← rS, Ei) / M(rT) (18.7)
The absorbed fraction divided by the mass of the target region is named the
specific absorbed fraction Φ:

Φ(rT ← rS, Ei) = φ(rT ← rS, Ei) / M(rT) (18.8)
The mass of both the source and target regions can vary in time, which means that
the absorbed fraction will change as a function of time after the administration,
and the full time dependent version of the internal dosimetry nomenclature
must be applied (Eq. (18.9)). This phenomenon has been noted in the clinic
for tumours, the thyroid and lymph nodes, and can significantly influence the
magnitude of the absorbed dose.
Φ(rT ← rS, Ei, t) = φ(rT ← rS, Ei, t) / M(rT, t) (18.9)
The total mean absorbed dose to the target region D(rT) is given by summing the
separate contributions from each source region rS (Eq. (18.10)). The self-absorbed
dose commonly gives the largest fractional contribution to the total absorbed
dose in a target region. The self-absorbed dose refers to when the source and
target regions are identical, while the cross-absorbed dose refers to the case in
which the source and the target regions are different from each other.
D(rT) = ΣrS Ã(rS) S(rT ← rS) (18.10)
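The summation over source regions in Eq. (18.10) can be sketched with simple dictionaries (the region names, S values and time-integrated activities below are hypothetical):

```python
def total_absorbed_dose(tia_mbq_s, s_values, target):
    """Eq. (18.10): D(rT) = sum over rS of A~(rS) * S(rT <- rS).
    tia_mbq_s maps source region -> time-integrated activity (MBq s);
    s_values maps (target, source) -> S value (mGy/(MBq s))."""
    return sum(a_tilde * s_values[(target, source)]
               for source, a_tilde in tia_mbq_s.items())

# Hypothetical inputs: self-dose to the liver plus cross-dose from the kidneys
tia = {"liver": 1000.0, "kidneys": 500.0}
s = {("liver", "liver"): 0.01, ("liver", "kidneys"): 0.001}
```

In this sketch the self-absorbed dose term (liver to liver) dominates, as is typically the case.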
The full time dependent version of the MIRD formalism can be found in
Eq. (18.11), where Ḋ denotes the absorbed dose rate:

D(rT, TD) = ∫₀^TD Ḋ(rT, t) dt = ΣrS ∫₀^TD A(rS, t) S(rT ← rS, t) dt (18.11)
TABLE 18.1. PARAMETERS USED IN THE MIRD FORMALISM

i: type of radiation
rS: source region
rT: target region
TD: integration period

TABLE 18.2. QUANTITIES AND UNITS USED IN THE MIRD FORMALISM

Ã(rS, TD): time-integrated activity (Bq s)
ã(rS, TD): time-integrated activity coefficient (s)
D(rT): mean absorbed dose (Gy)
Ḋ(rT, t): absorbed dose rate (Gy/s)
Ei: mean energy of the ith transition (J or MeV)
M(rT, t): mass of the target region (kg)
S(rT ← rS, t): absorbed dose rate per unit activity, the S value (Gy (Bq s)⁻¹)
Yi: number of ith transitions per nuclear transformation ((Bq s)⁻¹)
φ(rT ← rS, Ei, t): absorbed fraction (dimensionless)
Φ(rT ← rS, Ei, t): specific absorbed fraction (kg⁻¹)
A(rS, t) = A(rS, 0) e^(−(λ + λj)t) (18.12)
The decay constant λ equals the natural logarithm of 2 (ln 2 ≈ 0.693) divided by
the half-life. In a linear-log plot, the decay constant corresponds to the slope of
the curve described by the exponential function.
λ = ln 2 / T1/2 (18.13)
The physical half-life T1/2 and the biological half-life T1/2, j can be combined into
an effective half-life T1/2,eff according to Eq. (18.14). The effective half-life is
always shorter than both the biological and the physical half-lives alone.
1/T1/2,eff = 1/T1/2,j + 1/T1/2 (18.14)
The cumulated activity for the relevant time period is commonly calculated
as the time integral of an exponential function (Eq. (18.15)). However, other
functions could be used, with trapezoidal or Riemann integration (Fig. 18.2). The
trapezoidal and Riemann methods can be more reproducible than integrating a
fitted exponential, depending on how well the exponential fit can be performed.
Ã = ∫₀^∞ A(rS, 0) e^(−(λ + λj)t) dt = A(rS, 0) / (λ + λj) (18.15)
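Equations (18.13)-(18.15) can be combined into a short sketch (the half-lives and activity below are illustrative; λj denotes the biological elimination constant):

```python
import math

def effective_half_life(t_phys, t_bio):
    """Eq. (18.14): 1/T_eff = 1/T_bio + 1/T_phys.  T_eff is always
    shorter than either half-life alone."""
    return 1.0 / (1.0 / t_phys + 1.0 / t_bio)

def cumulated_activity(a0_bq, t_phys_s, t_bio_s):
    """Eq. (18.15): integral of A0*exp(-(lam + lam_j)*t) from 0 to
    infinity, with lam and lam_j from Eq. (18.13)."""
    lam_total = math.log(2) / t_phys_s + math.log(2) / t_bio_s
    return a0_bq / lam_total  # Bq s
```

Equivalently, Ã = A0 T1/2,eff / ln 2, which is sometimes the more convenient form.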
FIG.18.3. Two examples of the possible influences of curve fitting caused by the number
and the timing of activity measurements. The solid line gives the real activity versus time; the
dotted line represents the exponential curve fitted to the measurements, which are shown as
black dots.
The extrapolation from time zero to the first measurement of the activity in the
source region, and the extrapolation from the last measurement of the activity
in the source region to infinity, can also strongly influence the accuracy in the
time-integrated activity (see Fig. 18.4).
FIG. 18.4. Extrapolation before the first and after the last measurement point.
S(rT, scaled) = S(rT, tabulated) × M(rT, tabulated) / M(rT, scaled) (18.16)
This is a useful method to adjust an S value found in a table to the true
mass of the target region. When scaling an S value, the absorbed fraction is
considered to be constant in the interval of scaling. The change in the S value is
then set equal to the change in mass of the target. It should be noted that linear
interpolation should never be performed in S value tables (Fig. 18.6).
FIG. 18.5. Absorbed fraction for unit density spheres as a function of the mass of the spheres
for mono-energetic photons (left) and electrons (right) (data from Ref. [18.7]).
The absorbed fractions for photons are relatively constant, so the S value for
penetrating radiation can be scaled by mass.
FIG.18.6. Linear interpolation in S value tables gives S values that are too large. In this
particular case for a unit density sphere of 131I, linear scaling would give an S value that is
significantly greater than when scaling according to mass is performed.
The S value can be separated into a penetrating (p) and a non-penetrating (np)
component, where the absorbed fraction for the non-penetrating radiation is
taken as 1:

S = Sp + Snp = Sp + Δnp/m (18.17)

Sp = S − Δnp/mphantom (18.18)

Srecalculated = (S − Δnp/mphantom) + Δnp/mtrue (18.19)
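The recalculation in Eqs (18.17-18.19) can be sketched as follows (Δnp, the mean energy emitted per decay as non-penetrating radiation, and the masses are illustrative; the penetrating component is assumed unaffected by the mass change):

```python
def rescale_s_value(s_phantom, delta_np, m_phantom, m_true):
    """Rescale a tabulated S value to the true target mass: remove the
    non-penetrating component computed with the phantom mass
    (Eq. (18.18)) and add it back computed with the true mass
    (Eq. (18.19)); the absorbed fraction for non-penetrating
    radiation is taken as 1."""
    s_penetrating = s_phantom - delta_np / m_phantom
    return s_penetrating + delta_np / m_true
```

Halving the target mass roughly doubles the non-penetrating contribution while leaving the penetrating contribution unchanged.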
The absorbed fractions for photons and electrons vary according to the
initial energy and the volume/mass of the target region and, thus, the suitability
of the recalculation will also vary, as was discussed in the previous section
(Fig. 18.5).
The principle of reciprocity means that the S value is approximately the
same for a given combination of source and target regions, i.e. S(rT ← rS) is equal
to S(rS ← rT). The reciprocity principle is only truly valid under ideal conditions,
in regions with a uniformly distributed radionuclide within a material that is
either (i) infinite and homogeneous or (ii) absorbs the radiation without scatter.
The ideal conditions are not present in the human body, although the reciprocity
principle can be seen in S value tables for human phantoms as the numbers are
almost mirrored along the diagonal axis of the table.
Smaterial Y = Smaterial X × (ρmaterial X / ρmaterial Y) (18.20)
The absorbed dose D is defined as the mean energy imparted dε̄ per mass
element dm:

D = dε̄/dm (18.21)
The absorbed dose is defined at a point, but it is determined from the mean
specific energy and is, thus, a mean value. This is more obvious from an older
definition of absorbed dose, where it is defined as the limit of the mean specific
energy as the mass approaches zero [18.9]:
D = lim(m→0) z̄ (18.22)
The dosimetric quantity that considers stochastic effects and is, thus, not
based on mean values, is the specific energy z. The specific energy represents
a stochastic distribution of individual energy deposition events divided by the
mass m in which the energy was deposited [18.10]:
z = ε/m (18.23)
The unit of the specific energy is joules per kilogram and its special name
is gray. Its relevance is especially important in microdosimetry which is the study
of energy deposition spectra within small volumes corresponding to the size of a
cell or cell nucleus.
The energy imparted to a given volume is the sum of all energy deposits i
in the volume:
ε = Σi εi (18.24)
The energy deposit is the fundamental quantity that can be used for the
definition of all other dosimetric quantities. Each energy deposit is the energy
deposited in a single interaction i:
εi = εin − εout + Q (18.25)
where
εin is the kinetic energy of the incident ionizing particle;
εout is the sum of the kinetic energies of all ionizing particles leaving the
interaction;
and Q is the change in the rest energies of the nucleus and of all of the particles
involved in the interaction.
If the rest energy decreases, Q has a positive value and if the rest energy
increases, it has a negative value. The unit of energy imparted and energy
deposited is joules or electronvolts. The summation of energy deposits to
obtain the energy imparted may be performed over one or more events, where
an event denotes the energy imparted by statistically correlated particles, such as
a proton and its secondary electrons.
The absorbed dose is a macroscopic entity that corresponds to the mean
value of the specific energy per unit mass, but is defined at a point in space.
When considering an extended volume such as an organ in the body, then for the
mean absorbed dose to be a true representation of the absorbed dose to the target
volume, either radiation equilibrium or charged particle equilibrium must exist.
Radiation equilibrium means that the energy entering the volume must equal
the energy leaving the volume for both charged and uncharged radiation. The
conditions under which radiation equilibrium is present in a volume containing
a distributed radioactive source are [18.11]:
The radioactive source must be uniformly distributed;
The atomic composition of the medium must be homogeneous;
The density of the medium must be homogeneous;
No electric or magnetic fields may disturb the paths of the charged particles.
Charged particle equilibrium always exists if radiation equilibrium exists.
However, charged particle equilibrium can exist even if the conditions for
radiation equilibrium are not fulfilled.
If only charged particles are emitted from the radioactive source (as is
the case for β emitters such as 90Y and 32P), charged particle equilibrium exists
if radiative losses are negligible. Radiative losses increase with increasing
electron energy and with an increase in the atomic number of the medium.
The maximum energy for pure β emitters commonly used in nuclear medicine
(e.g. 90Y, 32P and 89Sr) is less than 2.5 MeV and the ratio of the radiative stopping
power to the total stopping power is 0.018 and 0.028 for skeletal muscle and
cortical bone, respectively, for an electron energy of 2.5 MeV. This would imply
that the radiative losses can be neglected in internal dosimetry and charged
particle equilibrium can be assumed.
If both charged and uncharged particles (photons) are emitted (as is the case
with most radionuclides used in nuclear medicine), charged particle equilibrium
exists if the interaction of the uncharged particles within the volume is negligible.
A negligible number of interactions means that the photon absorbed fraction is
low. Photon absorbed fractions as a function of mass can be seen in Fig. 18.5, but
it should be pointed out that the relative photon contribution for a radionuclide
is also dependent on the energy and the probability of emission of electrons. For
example, the photon contribution to the absorbed dose cannot be disregarded for
111In in a 10 g sphere, where the photons contribute 45% to the total S value.
18.1.4.1. Non-uniform activity distribution
The activity distribution is seldom completely uniform over the whole
tissue. This effect was theoretically investigated on a macroscopic level by
Howell et al. [18.12] by introducing activity distributions that varied as a
that the cross-absorbed dose results from penetrating photon radiation only. It is
important to note that the cross-organ absorbed dose from high energy β emitters,
such as 90Y and 32P, can be significant in preclinical small animal studies used to
study radiation toxicity. The importance of the cross-absorbed dose in comparison
to the self-absorbed dose strongly depends on both the S value and the relative
size of the time-integrated activity within the source and the target regions.
The MIRD formalism as such is equally applicable to any well defined
source and target region combinations [18.13, 18.14]. Depending on the volume
and dimensions of the regions, different types of emitted radiation will be of
different importance. To conclude the above discussion, a number of factors
causing non-uniformity in the absorbed dose distribution have been identified:
Edge effects due to lack of radiation equilibrium;
Lack of radiation equilibrium and charged particle equilibrium in the whole
volume (high energy electrons emitted in a small volume);
Few atoms in the volume, causing a lack of radiation equilibrium and
introduction of stochastic effects;
Temporal non-uniformity due to the kinetics of the radiopharmaceutical;
Gradients due to hot spots;
Interfaces between media causing backscatter;
Spatial non-uniformity in the activity distribution.
18.2. INTERNAL DOSIMETRY IN CLINICAL PRACTICE
18.2.1. Introduction
Internal dosimetry is performed with different purposes, which would
require different levels of accuracy in the calculated absorbed dose, depending
on the subgroup:
Dosimetry for diagnostic procedures utilized in nuclear medicine;
Dosimetry for therapeutic procedures (radionuclide therapy);
Dosimetry in conjunction with accidental intake of radionuclides.
The dosimetry for a diagnostic procedure is performed to optimize the
procedure concerning radiation protection consistent with the requirements
of an accurate diagnostic test. This is an optimization of a clinical procedure
applicable to all persons. The most relevant would, therefore, be to utilize
the mean pharmacokinetics for the radiopharmaceutical for the calculation
of the time-integrated activity and S values based on a reference man
phantom. The ICRP has published the absorbed dose per injected activity
for most radiopharmaceuticals used for diagnostic procedures in the clinic in
Publication 53 [18.5], with updates published in Publications 80 and 106 [18.15,
18.16].
The purpose of performing dosimetry for a patient that receives radionuclide
therapy is to optimize the treatment so as to achieve the highest possible absorbed
dose to the tumour, consistent with absorbed dose limiting toxicities. Thus,
individualized treatment planning should be performed that takes into account
the patient specific pharmacokinetics and biodistribution of the therapeutic agent.
The procedure to apply after an accidental intake of radionuclides must be
decided on a case by case basis. The procedure will depend on the level of
activity, the radionuclide involved, the number of persons affected, whether the
dosimetry is performed retrospectively or as a precaution, and whether it is
possible to perform measurements after the intake.
18.2.2. Dosimetry on an organ level
Dosimetry on an organ level could be performed from activity quantification
using either 2-D or 3-D images. Two dimensional images may include whole
body scans or spot views covering the regions of interest. Three dimensional
single photon emission computed tomography (SPECT) is mostly a limited field
of view study that includes only the essential structures of interest. The advantage
of 3-D tomographic methods is that they avoid the problems associated with
corrections for activity in overlying and underlying tissues (e.g. muscle, gut
and bone), and corrections for activity in partly overlapping tissues (e.g. liver
and right kidney). Three dimensional positron emission tomography (PET) is
emerging as a powerful dosimetric tool because of the greater ease and accuracy
of radiotracer quantification with this modality.
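The geometric mean (conjugate view) method that underlies much 2-D quantification can be sketched as follows. This minimal version assumes a single source organ, a uniform effective attenuation coefficient and a known system calibration factor, and it omits scatter, background and source-thickness corrections; all numerical values are illustrative:

```python
import math

def conjugate_view_activity(counts_ant, counts_post, mu, thickness_cm, cal_factor):
    """Organ activity (MBq) from the geometric mean of anterior and
    posterior counts, corrected for attenuation through a patient of the
    given thickness. mu is the effective linear attenuation coefficient
    (1/cm) and cal_factor the system sensitivity (counts per MBq)."""
    geometric_mean = math.sqrt(counts_ant * counts_post)
    transmission = math.exp(-mu * thickness_cm / 2.0)  # sqrt of exp(-mu*T)
    return geometric_mean / (transmission * cal_factor)

# Illustrative numbers: mu ~ 0.11 /cm for 364 keV photons in soft tissue
activity_mbq = conjugate_view_activity(52000, 31000, 0.11, 20.0, 950.0)
```

In practice, the effective attenuation coefficient is obtained from a transmission measurement and the calibration factor from a phantom of known activity.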
S value tables for human phantoms can be found in MIRD Pamphlet No. 11
[18.17], in the OLINDA EXM software [18.18] and on the RADAR web site
(www.doseinfo-radar.com). OLINDA/EXM stands for organ level internal dose
assessment/exponential modelling, and is software for the calculation of
absorbed dose to different organs in the body. OLINDA includes S values for
most radionuclides and for ten different human phantoms (adult and children at
different ages as well as pregnant and non-pregnant female phantoms). Tumours
are not included in the phantoms, although the S values for unit density spheres
provided in the software could be applied for the calculation of the self-absorbed
dose to the tumour. OLINDA also includes a module for biokinetic analysis,
allowing the user to fit an exponential equation to the data entered on the activity
in an organ at different time points. S values can be scaled by mass within the software:

S_patient = S_phantom × (m_phantom / m_patient) (18.26)
Since it requires a great deal of work to determine the mass of every organ
for each patient, it has been suggested that the S values might instead be scaled
to the total mass of the patient. This is a cruder method, as it assumes that
organ size follows the total mass of the body. The lean body weight of the patient
should be used, to avoid the unrealistic organ masses, and thus S values, that
would otherwise result for obese or very lean patients.
S_patient = S_phantom × (m_TB,phantom / m_TB,patient) (18.27)
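Both scalings amount to a one-line rescaling of a tabulated S value; a minimal sketch, in which the function names and numerical values are illustrative:

```python
def scale_s_value(s_phantom, m_organ_phantom, m_organ_patient):
    """Eq. (18.26): scale a self-irradiation S value by the ratio of
    phantom organ mass to patient organ mass (same units)."""
    return s_phantom * (m_organ_phantom / m_organ_patient)

def scale_s_value_total_body(s_phantom, m_tb_phantom, m_tb_patient):
    """Eq. (18.27): cruder scaling by total body mass; the patient value
    should be the lean body weight."""
    return s_phantom * (m_tb_phantom / m_tb_patient)

# Illustrative only: a 1.8 kg phantom liver applied to a 2.1 kg patient liver
s_patient = scale_s_value(4.1e-5, 1.8, 2.1)  # units follow the input S value
```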
can, thus, be calculated. Parametric images that display the biological half-life
for each voxel could also be produced by this technique.
The registration of the images acquired at different points in time after the
administration becomes essential for the accuracy that can be achieved in the
calculation of the time-integrated activity on a voxel level. Other important
factors that determine the accuracy of the time-integrated activity and, thus, of
the absorbed dose, are the acquired number of counts per voxel (a random error),
the accuracy of the attenuation correction (a systematic error) and the calibration
factor that translates the number of counts into activity (random and systematic
errors). Multimodality imaging such as SPECT/CT and PET/CT facilitates the
interpretation of the images as the CT will provide anatomical landmarks to support
the functional images, which could change from one acquisition to the next.
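A per-voxel mono-exponential fit of the kind used to produce parametric half-life images can be sketched as below. This illustrative version assumes the frames are already registered, that each voxel follows a single exponential, and uses a log-linear least squares fit:

```python
import numpy as np

def effective_half_life_map(frames, times):
    """Fit A(t) = A0 * exp(-lambda_eff * t) in each voxel of a time series
    of registered activity images (frames: list of 2-D arrays; times in
    hours) by linear regression on log(counts). Returns the effective
    half-life (hours) per voxel; voxels with non-positive counts get NaN."""
    t = np.asarray(times, dtype=float)
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])  # (T, y, x)
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.log(stack)
    valid = np.isfinite(logs).all(axis=0)
    t_mean = t.mean()
    # Least squares slope of log A versus t, voxel by voxel
    num = ((t - t_mean)[:, None, None] * (logs - logs.mean(axis=0))).sum(axis=0)
    den = ((t - t_mean) ** 2).sum()
    lam = -num / den
    return np.where(valid & (lam > 0), np.log(2) / lam, np.nan)
```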
A dose point kernel describes the deposited energy as a function of distance
from the site of emission of the radiation. Figure 18.7 displays a dose point
kernel for 1 MeV mono-energetic electrons. Convolution of a dose point kernel
and the activity distribution from an image acquired at a certain time after the
injection gives the absorbed dose rate. Dose point kernels provide a tool for fast
calculation of the absorbed dose on a voxel level. However, the main drawback
is that a dose point kernel is only valid in a homogeneous medium, where it is
commonly assumed that the body is uniformly unit density soft tissue.
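A minimal sketch of the convolution, assuming the kernel has already been sampled onto the same voxel grid and the medium is uniform unit density soft tissue; direct summation is used here for clarity, although FFT based convolution would normally be preferred for speed:

```python
import numpy as np

def dose_rate_map(activity, kernel):
    """Absorbed dose rate by superposing a dose point kernel (dose per
    unit activity, sampled on the voxel grid with odd side lengths) over
    a 3-D activity map (Bq per voxel). A radially symmetric kernel makes
    convolution and correlation identical, so a direct sliding sum is
    used. Valid only in a homogeneous medium."""
    nz, ny, nx = activity.shape
    kz, ky, kx = kernel.shape
    pad = np.pad(activity, ((kz // 2,) * 2, (ky // 2,) * 2, (kx // 2,) * 2))
    out = np.zeros((nz, ny, nx))
    for i in range(kz):
        for j in range(ky):
            for k in range(kx):
                out += kernel[i, j, k] * pad[i:i + nz, j:j + ny, k:k + nx]
    return out
```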
Monte Carlo simulations that use the activity distribution from a functional
image (PET or SPECT) and the density distribution from a CT image avoid the
problem of non-uniform media, although full Monte Carlo simulations are time
consuming. EGS (electron gamma shower), MCNP (Monte Carlo N-particle
transport code), Geant and Penelope are commonly used Monte Carlo codes.
FIG. 18.7. A scaled dose point kernel for 1 MeV electrons [18.20]. r/r0 expresses the
distance scaled to the continuous slowing down approximation range of the electron,
and ∫ F(r/r0, E) d(r/r0) = 1.
[18.10] INTERNATIONAL COMMISSION ON RADIATION UNITS AND
MEASUREMENTS, Microdosimetry, Rep. 36, ICRU, Bethesda, MD (1983).
[18.11] ATTIX, F.H., Introduction to Radiological Physics and Radiation Dosimetry, John
Wiley & Sons, New York (1986).
[18.12] HOWELL, R.W., RAO, D.V., SASTRY, K.S.R., Macroscopic dosimetry for
radioimmunotherapy: Nonuniform activity distributions in solid tumours, Med.
Phys. 16 (1989) 66–74.
[18.13] HOWELL, R.W., The MIRD schema: From organ to cellular dimensions, J. Nucl.
Med. 35 (1994) 531–533.
[18.14] KASSIS, A.I., The MIRD approach: Remembering the limitations, J. Nucl. Med. 33
(1992) 781–782.
[18.15] INTERNATIONAL COMMISSION ON RADIOLOGICAL PROTECTION,
Radiation Dose to Patients from Radiopharmaceuticals (Addendum to ICRP
Publication 53), Publication 80, Pergamon Press, Oxford and New York (1998).
[18.16] INTERNATIONAL COMMISSION ON RADIOLOGICAL PROTECTION,
Radiation Dose to Patients from Radiopharmaceuticals (Addendum 3 to ICRP
Publication 53), Publication 106, Elsevier (2008).
[18.17] SNYDER, W.S., FORD, M.R., WARNER, G.G., WATSON, S.B., MIRD Pamphlet
No. 11, S, Absorbed Dose per Unit Cumulated Activity for Selected Radionuclides and
Organs, The Society of Nuclear Medicine, Reston, VA (1975).
[18.18] STABIN, M.G., SPARKS, R.B., CROWE, E., OLINDA/EXM: The second-generation
personal computer software for internal dose assessment in nuclear medicine, J. Nucl.
Med. 46 (2005) 1023–1027.
[18.19] STABIN, M.G., MIRDOSE: Personal computer software for internal dose assessment
in nuclear medicine, J. Nucl. Med. 37 (1996) 538–546.
[18.20] BERGER, M., Improved point kernels for electron and beta-ray dosimetry,
NBSIR 73-107, National Bureau of Standards (1973).
CHAPTER 19
RADIONUCLIDE THERAPY
G. FLUX1, YONG DU2
Joint Department of Physics1 and Nuclear Medicine2,
Royal Marsden Hospital and Institute of Cancer Research,
Surrey, United Kingdom
19.1. INTRODUCTION
Cancer has been treated with radiopharmaceuticals since the 1940s. The
radionuclides originally used, including 131I and 32P, are still in use. The role of
the physicist in radionuclide therapy encompasses radiation protection, imaging
and dosimetry. Radiation protection is of particular importance given the high
activities of the unsealed sources that are often administered, and must take into
account medical staff, comforters and carers, and, as patients are discharged
while still retaining activity, members of the public. Regulations concerning
acceptable levels of exposure vary from country to country. If the administered
radiopharmaceutical is a γ emitter, then imaging can be performed, which may be
either qualitative or quantitative. While a regular system of quality control must
be in place to prevent misinterpretation of image data, qualitative imaging does
not usually rely on the image corrections necessary to determine the absolute
levels of activity that are localized in the patient. Accurate quantitative imaging is
dependent on these corrections and can permit the distribution of absorbed doses
delivered to the patient to be determined with sufficient accuracy to be clinically
beneficial.
Historically, the majority of radionuclide therapies have entailed the
administration of activities that are either fixed, or may be based on patient
weight or body surface area. This follows methods of administration necessarily
adopted for chemotherapy. However, given that in vivo imaging is possible for
many radiopharmaceuticals and that the mechanism of therapy is the delivery
of a radiation absorbed dose, the principles of external beam radiation therapy
apply equally to radionuclide therapies. These are summarized in European
Directive 97/43:
"For all medical exposure of individuals for radiotherapeutic purposes,
exposures of target volumes shall be individually planned; taking into
account that doses of non-target volumes and tissues shall be as low as
reasonably achievable and consistent with the intended radiotherapeutic
purpose of the exposure."
time interval between ablation and the determination of success, which itself is
subject to debate. The issue that most affects the physicist is that of standardized
versus personalized treatments, which has been debated since the early 1960s.
Fixed activities given for ablation can vary from 1100 to 4500 MBq, and
those given for subsequent therapy procedures can be in excess of 9000 MBq.
Published guidelines report the variation in fixed activities but do not make
recommendations concerning these levels.
It has been conclusively demonstrated in a number of dosimetry studies
that patients administered fixed activities of radioiodine receive absorbed doses
to remnant tissue, residual disease and to normal organs that can vary by several
orders of magnitude. This potentially has important consequences, as it implies
that, in many cases, patients may be receiving less absorbed dose than is required
for a successful ablation or therapy, while in other cases patients will receive
absorbed doses to malignant and normal tissues that are far higher than
necessary. Undertreatment will result in further administrations of radioiodine
with the risk of dedifferentiation over time, so that tumours become less iodine
avid. Overtreatment can result in unnecessary toxicity which can take the
form of sialadenitis and pancytopenia. Radiation pneumonitis and pulmonary
fibrosis have been seen in patients with diffuse lung metastases, and there is a
risk of leukaemia in patients receiving high cumulative activities. Personalized
treatments were first explored in the 1960s with patients administered activities
required to deliver a 2 Gy absorbed dose to the blood and constraints regarding
radioactive uptake levels at 48 h. Further approaches have been taken, based on
whole body absorbed doses, which can be considered a surrogate for absorbed
doses to the red marrow.
Dosimetry for thyroid ablations presents a different set of challenges to
that performed for therapies. In the former case, the small volume of remnant
tissue can render delineation inaccurate, which subsequently impinges on the
accuracy of the dose calculation. Therapies of metastatic disease can involve
larger volumes, although heterogeneous uptake is frequently encountered and
lung metastases in particular require careful image registration and attenuation
correction (Fig.19.1).
A current issue is that of stunning, whereby a tracer level of activity may
reduce further uptake at a subsequent ablation or therapy. This phenomenon would
have consequences for individualized treatment planning, although at present its
extent, and indeed its existence, are contested. However, it is not infrequent that a
greater extent of uptake may be seen from a larger therapy administration than
from a tracer administration (Fig.19.1).
While subject to national regulations, patients receiving radioiodine
treatment frequently require in-patient monitoring until retention of activity falls
to levels acceptable to allow contact with family members and the public. It is,
therefore, necessary for the physicist to give strict advice on radiation protection,
taking into account the patient's home circumstances.
FIG. 19.1. Absorbed dose maps resulting from a tracer administration of 118 MBq 131I-NaI
(left) and, subsequently, 8193 MBq 131I-NaI (right) for therapy (maximum absorbed dose:
90 Gy). The absorbed doses were calculated using 3-D dosimetry on a voxel by voxel basis.
and the IAEA. However, no trials have yet been performed to assess the optimal
timing or levels of administration.
19.3.1. Treatment specific issues
The main issue concerning the use of radiopharmaceuticals for the treatment
of bone pain is that of determining the ideal treatment protocol, including the
optimal radionuclide to use, and whether this should be standardized or could be
modified on an individual patient basis. In practice, local logistics and availability
will have a strong impact on the radionuclide of choice. It is of particular interest
that the radionuclides used vary widely in terms of their emissions. Arguments
can be made to support both approaches, in that the longer range emitters may
be more likely to target all of the disease, while the shorter range emitters
(and particularly α emitters) will avoid unnecessary toxicity. There is also a
wide range of physical half-lives between these radionuclides and there is some
evidence to suggest that the longer lived 89Sr can produce a response that takes
longer to occur but that is longer lasting.
Dosimetry for bone pain palliation is challenging due to the difficulties
of assessing the distribution of uptake in newly formed trabecular bone and its
geometrical relation to viable red marrow and to disease. Nevertheless, models
have been developed to address this interesting problem and a statistically
significant correlation has been demonstrated between whole body absorbed
doses and haematological toxicity. Dosimetry for other radionuclides is highly
dependent on the imaging properties of these radionuclides, although it could
potentially be used to increase administered activities in individual patients.
19.4. HEPATIC CANCER
Hepatocellular carcinoma is a major cause of cancer deaths. In recent
years, primary and secondary liver cancers have been treated with a range of
radionuclides administered intra-arterially, based on the fact that while the
liver has a dual blood supply, tumours are supplied only by the hepatic artery.
The advantage of this approach is that treatments can be highly selective and
can minimize absorbed doses delivered to normal organs, including healthy
liver. This procedure requires interventional radiology as the activity must
be administered directly into the common, right or left hepatic artery via an
angiographic catheter under radiological control and so is a prime example of
the multidisciplinary nature of radionuclide therapy. Prior to administration,
a diagnostic level of 99mTc-macroaggregate of albumin (MAA) is given to
ascertain the likelihood of activity shunting to the lung. This is usually evaluated
semi-quantitatively.
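One common semi-quantitative approach reduces to a simple ratio of planar region of interest counts; geometric mean counts would typically be used for each region, and the values below are illustrative:

```python
def lung_shunt_fraction(lung_counts, liver_counts):
    """Fraction of the hepatic arterial activity reaching the lungs,
    estimated from planar 99mTc-MAA region of interest counts."""
    return lung_counts / (lung_counts + liver_counts)

lsf = lung_shunt_fraction(6000.0, 94000.0)  # a 6% shunt
```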
To date, two commercial products have received FDA approval, classified
as medical devices rather than as drugs. Both use 90Y. Theraspheres comprise 90Y
incorporated into small silica beads and SIR-Spheres consist of 90Y incorporated
into resin. Lipiodol, a mixture of iodized ethyl esters of the fatty acids of poppy
seed oil, has also been explored for intra-arterial administration. Lipiodol
has been radiolabelled with both 131I and 188Re, the latter having the benefit
of superior imaging properties, a longer path length and fewer concerns for
radiation protection due to the shorter half-life.
19.4.1. Treatment specific issues
As with other therapies, outstanding issues include the optimal activity
to administer, which is usually based on patient weight or body surface area,
arteriovenous shunting observed prior to treatment and the extent of tumour
involvement. However, there have been examples of treatments planned
according to estimated absorbed doses delivered to the normal liver and this
treatment offers the potential for individualized treatment planning based on
potential toxicity. Radiobiological consequences have been considered by
conversion of absorbed doses to biologically effective doses and there are
tentative conclusions that multiple treatments may deliver higher absorbed doses
to tumours while minimizing absorbed doses to normal liver.
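As an illustration of body surface area based prescription, one published scheme for resin 90Y microspheres combines BSA with the fractional tumour involvement. The sketch below uses the Mosteller BSA formula and illustrative patient values, and is not a substitute for the manufacturer's dosing documentation:

```python
import math

def bsa_mosteller(height_cm, weight_kg):
    """Body surface area (m^2) by the Mosteller formula."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def bsa_method_activity_gbq(bsa_m2, tumour_involvement_pct):
    """Prescribed activity (GBq): A = (BSA - 0.2) + involvement/100,
    the BSA scheme published for resin 90Y microspheres."""
    return (bsa_m2 - 0.2) + tumour_involvement_pct / 100.0

bsa = bsa_mosteller(175.0, 80.0)               # about 1.97 m^2
activity = bsa_method_activity_gbq(bsa, 25.0)  # about 2.02 GBq
```

Any arteriovenous shunting found on the pre-therapy scan would then reduce the prescribed activity further.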
A particular issue of this treatment concerning the physicist is that
of imaging, due to the need to ascertain lung uptake from the pre-therapy
99mTc-MAA scan, and the possibility of bremsstrahlung imaging as a basis for
calculation of absorbed doses delivered at therapy.
19.5. NEUROENDOCRINE TUMOURS
Neuroendocrine tumours (NETs) arise from cells that are of neural crest
origin and usually produce hormones. There are several types of neuroendocrine
cancer, including phaeochromocytoma, which originates in the chromaffin cells
of the adrenal medulla, and paraganglioma, which develops in extra-adrenal
ganglia, often the abdomen. Carcinoid tumours are slow growing and arise
mainly in the appendix or small intestine although they can also be found in
the lung, kidney and pancreas. Medullary thyroid cancer is a special case of an
NET that arises from the parafollicular cells of the thyroid gland, which produce
calcitonin. For the purposes of radionuclide therapy, NETs tend to be considered
as one malignancy and similar radiopharmaceutical treatments are administered,
extensive research in recent years, and must deal with problems resulting from
camera dead time, photon scatter and attenuation. Image based dosimetry of
90Y labelled pharmaceuticals has been performed using low levels of 111In given
either prior to therapy or included with the therapy administration. More recently,
bremsstrahlung imaging has been developed to enable dosimetry to be performed
directly.
19.6. NON-HODGKIN'S LYMPHOMA
Of the malignancies arising from haematological tissues, non-Hodgkin's
lymphoma is most commonly targeted with radiopharmaceuticals. Various forms
of lymphoma are classified into high grade or low grade, depending on the rate of
growth. Lymphomas are inherently radiosensitive, express a number of antigens
and can be successfully treated with radioimmunotherapy (RIT) using monoclonal
antibodies radiolabelled usually with either 131I or 90Y. A number of radiolabelled
monoclonal antibodies have been developed and two, 90Y-Ibritumomab Tiuxetan
(Zevalin) and 131I-Tositumomab (Bexxar), have received FDA approval. Both
target the B-cell specific CD20 antigen and have been used successfully in a
number of clinical trials. Both agents have demonstrated superior therapeutic
efficacy to prior chemotherapies in various clinical settings.
As with chemotherapy, RIT is more successful when administered at an
early stage of disease. Clinical trials are ongoing to determine how to more
effectively integrate RIT into the current clinical management algorithm in
lymphoma patients.
19.6.1. Treatment specific issues
Internal dosimetry has been applied to a number of studies using RIT with
varying results and conclusions. Initial dosimetry trials for Zevalin found that
while absorbed doses were delivered to tumours and to critical organs that varied
by at least tenfold, these did not correlate with toxicity or response, and that at
the levels of activity prescribed, the treatment was deemed to be safe, obviating
the need for individualized dose calculations, and treatment now tends to be
administered based on patient weight. However, FDA approval incorporated
the need for biodistribution studies to be performed prior to therapy using the
antibody radiolabelled with 111In as a surrogate for 90Y under the assumption that
the tracer kinetics would translate into clinical therapy and a number of studies
are now concerned with assessing the biodistribution and dosimetry based on
bremsstrahlung imaging (Fig.19.2).
FIG. 19.2. An absorbed dose map (maximum dose: 39 Gy) resulting from 3-D dosimetry
of bremsstrahlung data acquired from treatment of non-Hodgkin's lymphoma with
90Y-Ibritumomab Tiuxetan (Zevalin).
imaging, particularly where high energy γ emitters such as 131I are used for therapy.
Corrections can be applied with relative ease by assessing and subtracting the
scatter contribution from one or more energy windows placed adjacent to the
peak energy window. Attenuation correction is essential to quantitative imaging
and can be performed using a variety of methods. These can range from a
straightforward approach that assumes the patient consists entirely of water,
to more sophisticated methods that take into account the electron density on a
voxel by voxel basis. Dead time corrections are frequently overlooked in the
imaging of patients undergoing radionuclide therapy as these seldom require
consideration for diagnostic scanning. However, this is an issue of paramount
importance that will severely inhibit accurate quantification if ignored. Again,
this is a particularly significant factor when using high activities of 131I, and it is
essential that each camera is characterized accordingly prior to image processing
and analysis.
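One widely used form of this adjacent-window approach is the triple-energy-window (TEW) method; a minimal sketch, with illustrative 131I window settings:

```python
def tew_scatter_corrected(peak, lower, upper, w_peak, w_lower, w_upper):
    """Triple-energy-window scatter correction: the scatter in the
    photopeak window is estimated from two narrow windows adjacent to it,
    assuming a trapezoidal scatter spectrum under the peak. All counts
    refer to the same projection pixel or region; widths in keV."""
    scatter = (lower / w_lower + upper / w_upper) * w_peak / 2.0
    return max(peak - scatter, 0.0)

# 131I photopeak at 364 keV: e.g. a 20% (72.8 keV) peak window with
# 6 keV sub-windows on either side (illustrative settings)
primary = tew_scatter_corrected(10000.0, 400.0, 250.0, 72.8, 6.0, 6.0)
```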
Accurate image quantification enables further avenues of research and
development that have only recently begun to emerge as substantial areas of study.
Pharmacokinetic analysis, derived from sequential scanning, can allow inter- and
intra-patient variations in uptake and retention to be calculated which can aid
understanding and optimization of a radiopharmaceutical. This is particularly
relevant for new products. Accurate analysis is dependent on the acquisition of
sufficient statistics, including an appropriate number and timing of scans.
Inherent errors and uncertainties should be considered where possible.
Quantitative imaging and analysis facilitate the accomplishment of
accurate internal dosimetry calculations, which are of paramount importance to
patient specific treatment planning and which are now becoming mandatory for
new products and necessary for the acquisition of the evidence base on which
treatments should be performed. Although isolated studies into image based,
patient specific dosimetry have been performed for many years, this remains a
newly emerging field for which no standardized protocols or guidelines exist.
This puts greater responsibility on the physicist whose role must be to advise
on image acquisition. This invariably entails balancing scientific requirements
with local resource restrictions. As there is only limited software available for
dosimetry calculations at present, it may prove necessary to develop software or
spreadsheets to perform absorbed dose calculations.
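Such a spreadsheet level calculation reduces to integrating the time-activity curve and multiplying by an S value. A minimal sketch, assuming trapezoidal integration over the measured points with a physical-decay tail after the last one (all numerical values, including the S value, are illustrative):

```python
import math

def cumulated_activity(times_h, activities_mbq, phys_half_life_h):
    """Time-integrated activity (MBq*h): trapezoidal integration over the
    measured time-activity points, plus an analytic tail that assumes the
    last measured activity decays with the physical half-life."""
    area = 0.0
    for (t0, a0), (t1, a1) in zip(zip(times_h, activities_mbq),
                                  zip(times_h[1:], activities_mbq[1:])):
        area += 0.5 * (a0 + a1) * (t1 - t0)
    lam = math.log(2) / phys_half_life_h
    return area + activities_mbq[-1] / lam  # tail from last point to infinity

def mean_absorbed_dose(a_tilde_mbq_h, s_value_mgy_per_mbq_h):
    """MIRD-style mean organ dose: D = A_tilde * S."""
    return a_tilde_mbq_h * s_value_mgy_per_mbq_h

# Illustrative 131I example; the S value is assumed, not from a phantom table
a_tilde = cumulated_activity([2, 24, 48, 96], [80.0, 60.0, 40.0, 15.0], 192.5)
dose_mgy = mean_absorbed_dose(a_tilde, 2.0e-3)
```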
Essentially, the end point of such calculations should lead to a prediction of
absorbed doses to the tumour and critical organs from a given administration and
confirmation of the absorbed doses delivered after the therapy has been performed.
However, interpretation and understanding of the biological relevance of these
absorbed doses is not straightforward. Radiobiology for radionuclide therapy has
not been developed at the rate seen for external beam radiotherapy (EBRT) but
is now attracting more attention. There are both biological and physics aspects
213Bi or 225Ac for the treatment of leukaemia and 223Ra for the treatment of bone
metastases. Dosimetry for α emitters remains largely unexplored and is subject to
a number of challenges due to the difficulty of localization and the need to take
into account the emissions of daughter products, which may not remain at the
initial site of uptake.
The introduction of more stringent regulatory procedures will increase
the need for accurate internal dosimetry. The FDA now requires dosimetric
evaluation of new radiopharmaceuticals, and it is becoming commonplace for
new uses of existing agents to also be the subject of a phase I/II clinical trial to
ascertain absorbed doses delivered to critical organs. As brief palliative effects
translate into longer lasting survival, critical organ dosimetry will become more
important to ensure minimization of unnecessary late toxicity. However, it should
be noted that basic dosimetry calculations aimed at estimating critical organ
absorbed doses are not necessarily sufficient to ensure an optimal treatment
protocol, which must take tumour dosimetry into account.
The increasing trend towards accountability and evidence based medicine
will require adherence to strict radiation protection procedures for patients,
families and staff, and it may become necessary to assess exposure, particularly
to family members, with greater accuracy.
Scientific developments are likely to proceed rapidly, and many are now
within the reach of departments that have only basic research facilities, since
accurate absorbed dose calculations can be obtained from careful imaging
procedures and a relatively simple spreadsheet. Individualization of absorbed
dose calculations can then be achieved. Dosimetry based treatment planning
will also become an essential element of patient management as options of
chemotherapy or EBRT administered concomitantly with radiopharmaceuticals
are explored. The practice of internal dosimetry itself continues to evolve and
can be divided into categories that require different approaches. In addition
to image based dosimetry, these include whole body dosimetry, blood based
dosimetry and model based dosimetry. A particular focus at present is on
red marrow dosimetry, as this is the absorbed dose limiting organ for many
therapies.
Multi-centre prospective data collection is critical to the development
of this field, and international networks will be required to accrue a sufficient
number of patient statistics to enable the formulation of agreed and standardized
treatment protocols.
19.10. CONCLUSIONS
Nuclear medicine physics is playing an increasingly important role
in the service and management of radionuclide therapies. In addition to the
tasks traditionally associated with nuclear medicine, which primarily involve
the maintenance of imaging and associated equipment, radiation protection
and administration of national regulations, there is a growing requirement for
patient specific treatment planning which requires quantitative imaging, internal
dosimetry and radiobiological considerations.
While radionuclide therapy is usually performed within nuclear medicine, it
is often to be found within clinical or medical oncology, or within endocrinology.
In effect, radionuclide therapy requires a multidisciplinary approach that involves
diverse groups of staff. The adoption of treatments based on individualized
biokinetics, obtained from imaging and external retention measurements, places
the physicist more centrally within this network, as is seen in EBRT.
There is currently the need for increased training in this field, and due
to the relatively low numbers of patients treated even at specialist centres,
multi-centre networks will facilitate the exchange of expertise and the gathering
of prospective data necessary to advance the field. As this hitherto overlooked
area of cancer management expands, the scientific opportunities available to the
nuclear medicine physicist will also increase.
BIBLIOGRAPHY
BUFFA, F.M., et al., A model-based method for the prediction of whole-body absorbed dose
and bone marrow toxicity for Re-186-HEDP treatment of skeletal metastases from prostate
cancer, Eur. J. Nucl. Med. Mol. Imaging 30 (2003) 1114–1124.
CREMONESI, M., et al., Radioembolisation with Y-90-microspheres: dosimetric and
radiobiological investigation for multi-cycle treatment, Eur. J. Nucl. Med. Mol. Imaging 35
(2008) 2088–2096.
DU, Y., HONEYCHURCH, J., JOHNSON, P., GLENNIE, M., ILLIDGE, T., Microdosimetry
and intratumoral localization of administered 131I labelled monoclonal antibodies are critical to
successful radioimmunotherapy of lymphoma, Cancer Res. 67 (2007) 1335–1343.
GAZE, M.N., et al., Feasibility of dosimetry-based high-dose I-131-meta-iodobenzylguanidine
with topotecan as a radiosensitizer in children with metastatic neuroblastoma, Cancer Biother.
Radiopharm. 20 (2005) 195–199.
LASSMANN, M., LUSTER, M., HANSCHEID, H., REINERS, C., Impact of I-131 diagnostic
activities on the biokinetics of thyroid remnants, J. Nucl. Med. 45 (2004) 619–625.
MADSEN, M., PONTO, J., Handbook of Nuclear Medicine, Medical Physics Publishing,
Madison, WI (1992).
NINKOVIC, M.M., RAICEVIC, J.J., ADROVIC, F., Air kerma rate constants for gamma
emitters used most often in practice, Radiat. Prot. Dosimetry 115 (2005) 247–250.
STOKKEL, M.P., HANDKIEWICZ JUNAK, D., LASSMANN, M., DIETLEIN, M.,
LUSTER, M., EANM procedure guidelines for therapy of benign thyroid disease, Eur. J. Nucl.
Med. Mol. Imaging 37 (2010) 2218–2228.
CHAPTER 20
MANAGEMENT OF THERAPY PATIENTS
L.T. DAUER
Department of Medical Physics,
Memorial Sloan Kettering Cancer Center,
New York, United States of America
20.1. INTRODUCTION
The basic principles of radiation protection and their implementation as
they apply to nuclear medicine are covered in general in Chapter 3. This chapter
will look at the specific case of nuclear medicine used for therapy. In addition
to the standards discussed in Chapter 3, specific guidance on the release of
patients after radionuclide therapy can be found in the IAEA's Safety Reports
Series No. 63 [20.1].
When the patient is kept in hospital following radionuclide therapy, the
people at risk of exposure include hospital staff whose duties may or may not
directly involve the use of radiation. This can be a significant problem. However,
it is generally felt that it can be effectively managed with well trained staff and
appropriate facilities. On the other hand, once the patient has been released, the
groups at risk include members of the patients family, including children, and
carers; they may also include neighbours, visitors to the household, co-workers,
those encountered in public places, on public transport or at public events, and
finally, the general public. It is generally felt that these risks can be effectively
mitigated by the radiation protection officer (RPO) with patient-specific radiation
safety precaution instructions.
20.2. OCCUPATIONAL EXPOSURE
20.2.1. Protective equipment and tools
Protective clothing should be used in radionuclide therapy areas where
there is a likelihood of contamination. The clothing serves both to protect the
body of the wearer and to help to prevent the transfer of contamination to other
areas. Protective clothing should be removed prior to going to other areas such as
staff rooms. The protective clothing may include laboratory gowns, waterproof
gloves, overshoes, and caps and masks for aseptic work. When β emitters are
handled, the gloves should be thick enough to protect against external radiation
(perhaps double gloves should be utilized, when appropriate).
In therapeutic nuclear medicine, most of the occupational
exposure comes from 131I, which emits 364 keV photons. The attenuation by a
lead apron at this energy is minimal (less than a factor of two) and is unlikely to
result in significant dose reductions and may not justify the additional weight and
discomfort of wearing such protective equipment. Typically, thicker permanent
or mobile lead shielding may be more effectively applied for those situations
which warrant its use. The RPO should determine the need and types of shielding
required for each situation.
20.2.2. Individual monitoring
Individual monitoring, as discussed in Chapter3, needs to be considered
during the management of radionuclide therapy patients. In addition to general
advice (see Chapter 3) on persons most likely to require individual monitoring in
nuclear medicine, consideration needs to be given to nursing or other staff who
spend time with therapy patients.
20.3. RELEASE OF THE PATIENT
Protection of the patient in therapeutic nuclear medicine is afforded
through the application of the principles of justification and optimization; the
principle of dose limitation is not applied to patient exposures. A discussion of
these principles is given in Chapter 3. However, a patient who has undergone a
therapeutic nuclear medicine procedure is a source of radiation that can lead to
the exposure of other persons that come into the proximity of the patient. External
irradiation of the persons close to the patient is related to the radionuclide used,
its emissions, half-life and biokinetics, which can be important with some
radionuclides. Excretion results in the possibility of contamination of the patient's
environment and of inadvertent ingestion by other persons.
The system of radiation protection treats, in different ways, people who
may be exposed by therapeutic nuclear medicine patients. If the person is in close
proximity because their occupation requires it, then they are subject to the system
of radiation protection for occupationally exposed persons. If the person, other
than occupationally, is knowingly and voluntarily providing care, comfort and
support to the patient, then their exposure is considered part of medical exposure,
and they are subject to dose constraints as discussed in Chapter 3. If the person
is simply a member of the public, including persons whose work in the nuclear
medicine facility does not involve working with radiation, then their exposure is
part of public exposure and that is discussed in the next section.
While precautions for the public are rarely required after diagnostic nuclear
medicine procedures, some therapeutic nuclear medicine procedures, particularly
those involving 131I, can result in significant exposure to other people, especially
those involved in the care and support of patients. Hence, members of the public
caring for such patients in hospital or at home require individual consideration.
20.3.1. The decision to release the patient
Patients do not need to be hospitalized automatically after all radionuclide
therapies. Relevant national dose limits must be met and the principle of
optimization of protection must be applied, including the use of relevant dose
constraints. The decision to hospitalize or release a patient should be determined
on an individual basis. In addition to residual activity in the patient, the decision
should take many other factors into account. Hospitalization will reduce
exposure to the public and relatives, but will increase exposure to hospital staff.
Hospitalization often involves a significant psychological burden as well as
monetary and other costs that should be analysed and justified.
Medical practitioners shall determine whether the patient is willing and
is physically and mentally able to comply with appropriate radiation safety
precautions in the medical facility, should medical confinement be necessary, or
at home after release. For some patients, hospitalization during and following
treatment may be necessary and appropriate. The medical practitioners can
determine that such patients may need to remain hospitalized beyond the period of
of time dictated by dose constraints or other clinical criteria. For example, incontinent
patients or ostomy patients may require extended hospitalization to ensure safe
collection and disposal of radioactively contaminated body wastes. Where the
social system and infrastructure is such that there may be contamination risks
from discharged patients, it may be necessary to hospitalize the patient or extend
the normal hospitalization time, to avoid risk to the environment or other persons
[20.1].
The decision to hospitalize or release a patient after therapy should be made
on an individual basis considering several factors including residual activity in
the patient, the patient's wishes, family considerations (particularly the presence
of children), environmental factors, and existing guidance and regulations. The
nuclear medicine physician has the responsibility to ensure that no patient who
has undergone a therapeutic procedure with unsealed sources is discharged from
the nuclear medicine facility until it has been established by either a medical
physicist or by the facility's RPO that the activity of radioactive substances in
the body is such that the doses that may be received by members of the public
and family members would meet national criteria, including compliance with
relevant dose limits and the application of relevant dose constraints. Iodine-131
typically results in the largest dose to medical staff, the public, caregivers and
relatives. Other radionuclides used in therapy are usually pure beta emitters
(e.g. 32P, 89Sr and 90Y) that pose much less risk.
The modes of exposure to other people are: external exposure, internal
exposure due to contamination, and environmental pathways. The dose to
adults from patients is mainly due to external exposure. Internal contamination
of family members is most likely in the first seven days after treatment. In
most circumstances, the risks from internal contamination of others are less
significant than those from external exposure [20.1]. In general, contamination
of adults is less important than external exposure. However, contamination of
infants and children with saliva from a patient could result in significant doses
to the child's thyroid [20.2]. Therefore, it is important to avoid contamination
(particularly from saliva) of infants, young children and pregnant women owing
to the sensitivity of fetal and paediatric thyroids to cancer induction [20.1,
20.3]. Written instructions to the patient concerning contact with other persons
and relevant precautions for radiation protection must be provided as necessary
(see Section 20.3.2).
The day-to-day management of hospitalization and release of patients should
be the responsibility of the licensee. In applying dose constraints, registrants and
licensees should have a system to measure or estimate the activity in patients
prior to discharge and assess the dose likely to be received by members of the
household and members of the public. The result should be recorded. A method to
estimate the acceptable activity of radiopharmaceuticals for patients on discharge
from hospitals is to calculate the time integral of the ambient dose equivalent rate
and compare it with the constraints for patient comforters, or for other persons
who may spend time close to the patient. For this calculation, either a simple
conservative approach based on the physical half-life of the radionuclide or a
more realistic one, based on patient-specific effective half-life, can be used. The
assumptions made in these calculations with regard to time (occupancy factors)
and distance should be consistent with the instructions given to patients and
comforters at the time the patient is discharged from hospital. In the calculation
of the effective half-life, the behaviour of 131I can be modelled using two
components for the biological half-life: the extra-thyroidal (i.e. existing outside
the thyroid) iodine and thyroidal iodine following uptake by thyroid tissue. The
assumptions used often err on the side of caution; it is sometimes felt that they
significantly overestimate the potential doses to carers and the public. Examples
of such calculations are found in the literature [20.4, 20.5]. Further guidance on
radiation protection following radionuclide therapy can be found in Ref.[20.1]
(especially in annex II).
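The time integral described above can be sketched in code. The fractions and effective half-lives used for the two-component 131I model below are illustrative placeholders, not values from this text; real parameters must come from patient-specific measurements or from the guidance in Refs [20.4, 20.5].

```python
import math

def cumulative_dose(dose_rate_at_discharge, components, occupancy):
    """Time integral of the ambient dose equivalent rate from discharge
    to infinity, summed over exponential clearance components.

    dose_rate_at_discharge -- measured dose rate (uSv/h) at the distance of interest
    components -- list of (fraction, effective_half_life_days) pairs;
                  the fractions should sum to 1
    occupancy  -- fraction of time spent at that distance (occupancy factor)
    """
    total_hours = 0.0
    for fraction, half_life_days in components:
        # integral of f * exp(-ln2 * t / T) dt over [0, inf) equals f * T / ln 2
        total_hours += fraction * (half_life_days * 24.0) / math.log(2)
    return occupancy * dose_rate_at_discharge * total_hours

# Simple conservative approach: physical half-life of 131I (8.02 d) only
conservative = cumulative_dose(25.0, [(1.0, 8.02)], occupancy=0.25)

# More realistic two-component model (hypothetical extra-thyroidal and
# thyroidal fractions and effective half-lives)
realistic = cumulative_dose(25.0, [(0.8, 0.32), (0.2, 7.3)], occupancy=0.25)
```

The conservative estimate always exceeds the two-component one, because the extra-thyroidal component clears much faster than physical decay alone would suggest — which is why the text notes that the usual assumptions tend to overestimate doses to carers.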
written statement of the therapy and radionuclide used, for the patient to carry.
The IAEA gives an example of a credit card-style card that might be given to a
patient at the time of discharge (see Fig. 20.1) [20.1]. Personnel operating such
detectors should be specifically trained to identify and deal with nuclear medicine
patients. Records of the specifics of therapy with unsealed radionuclides should
be maintained at the hospital and given to the patient along with written
precautionary instructions [20.2].
FIG.20.1. Example of a credit card-style card that might be given to a patient at the time of
discharge: (a) front side; (b) rear side [20.1].
excreted radioactivity levels are low enough to be discharged through the toilet in
their home without exceeding public dose limits. The guidelines given to patients
will protect their family, carers and neighbours, provided the patient follows
these guidelines.
20.5. RADIONUCLIDE THERAPY TREATMENT ROOMS AND WARDS
The following aims should be considered in the design of radionuclide
therapy treatment rooms and wards: minimizing exposure to external radiation
and contamination, maintaining low radiation background levels to avoid
interference with imaging equipment, meeting pharmaceutical requirements, and
ensuring safety and security of sources (locking and control of access).
Typically, rooms for high activity patients should have separate toilet
and washing facilities. The design of safe and comfortable accommodation
for visitors is important. Floors and other surfaces should be covered with
smooth, continuous and non-absorbent surfaces that can be easily cleaned and
decontaminated. Secure areas should be provided with bins for the temporary
storage of linen and waste contaminated with radioactive substances.
20.5.1. Shielding for control of external dose
Radiation sources used in radiopharmaceutical therapy have the potential
to contribute significant doses to medical personnel and others who may spend
time within or adjacent to rooms that contain radiation sources. Meaningful
dose reduction and contamination control can be achieved through the use
of appropriate facility and room design. Shielding should be designed using
source related dose constraints for staff and the public. The shielding should
be designed using the principles of optimization of protection and taking into
consideration the classification of the areas within it, the type of work to be done
and the radionuclides (and their activity) intended to be used. It is convenient to
shield the source, where possible, rather than the room or the person. Structural
shielding is, in general, not necessary for most of the areas of a nuclear medicine
department. However, the need for wall shielding should be assessed in the
design of a therapy ward to protect other patients and staff, and in the design of
rooms housing sensitive instruments (e.g. well counters and gamma cameras) to
keep a low background.
Special consideration should be given to avoiding interference with work
in adjoining areas, such as imaging or counting procedures, or where fogging of
films stored nearby can occur. Imaging rooms are usually not controlled areas.
Material            Half-value layer    Tenth-value layer
Lead [20.16]        3.0 mm              11 mm
Concrete [20.17]    5.5 cm              18 cm
It should be noted, however, that the ISO radiation symbol is not intended
to be a warning signal of danger but only of the existence of radioactive
material. A new symbol has been launched by the IAEA and the ISO to help
reduce needless deaths and serious injuries from accidental exposure to large
radioactive sources [20.22]. It will serve as a supplementary warning to the
trefoil, which has no intuitive meaning and little recognition beyond those
educated in its significance. The new symbol is intended for IAEA category 1, 2
and 3 sources [20.23], defined as dangerous sources capable of causing death or serious
injury, including food irradiators, teletherapy machines for cancer treatment and
industrial radiography units. The symbol is to be placed on the device housing
the source, as a warning not to dismantle the device or to get any closer. It
will not be visible under normal use, only if someone attempts to disassemble
the device. For radionuclide therapy applications, the new symbol will not be
located on building access doors, transport packages or containers. Rather, the
would exist even if a crematorium were to handle several bodies per year
containing 131I.
REFERENCES
[20.1] INTERNATIONAL ATOMIC ENERGY AGENCY, Release of Patients After
Radionuclide Therapy, Safety Reports Series No. 63, IAEA, Vienna (2009).
[20.2] INTERNATIONAL COMMISSION ON RADIOLOGICAL PROTECTION,
Radiological Protection in Medicine, Publication 105, Elsevier, Oxford (2008).
[20.3] INTERNATIONAL COMMISSION ON RADIOLOGICAL PROTECTION,
The 2007 Recommendations of the International Commission on Radiological
Protection, Publication 103, Elsevier, Oxford (2007).
[20.4] NUCLEAR REGULATORY COMMISSION, Consolidated Guidance about Materials
Licensees, Rep. NUREG-1556, Vol. 9, Office of Standards Development, Washington,
DC (1998).
[20.5] NATIONAL COUNCIL ON RADIATION PROTECTION AND MEASUREMENTS,
Management of Radionuclide Therapy Patients, NCRP Rep. No. 155, Bethesda, MD (2006).
[20.6] ZANZONICO, P.B., SIEGEL, J.A., ST. GERMAIN, J., A generalized algorithm for
determining the time of release and the duration of post-release radiation precautions
following radionuclide therapy, Health Phys. 78 (2000) 648–659.
[20.7] INTERNATIONAL ATOMIC ENERGY AGENCY, Applying Radiation Safety
Standards in Nuclear Medicine, Safety Reports Series No. 40, IAEA, Vienna (2005).
[20.8] INTERNATIONAL COMMISSION ON RADIOLOGICAL PROTECTION, Release
of Patients After Therapy with Unsealed Sources, Publication 94, Pergamon Press,
Oxford (2004).
[20.9] STRAUSS, J., BARBIERI, R.L. (Eds), Yen and Jaffe's Reproductive Endocrinology,
6th edn, Saunders, Elsevier, Philadelphia, PA (2009).
[20.10] DAUER, L.T., WILLIAMSON, M.J., ST. GERMAIN, J., STRAUSS, H.W., Tl-201
stress tests and homeland security, J. Nucl. Cardiol. 14 (2007) 582–588.
[20.11] DAUER, L.T., STRAUSS, H.W., ST. GERMAIN, J., Responding to nuclear granny,
J. Nucl. Cardiol. 14 (2007) 904–905.
[20.12] INTERNATIONAL COMMISSION ON RADIOLOGICAL PROTECTION,
Radiological Protection in Medicine, Publication 73, Pergamon Press, Oxford (1996).
[20.13] INTERNATIONAL ATOMIC ENERGY AGENCY, Applications of the Concepts of
Exclusion, Exemption and Clearance, IAEA Safety Standards Series No. RS-G-1.7,
IAEA, Vienna (2004).
[20.14] INTERNATIONAL ATOMIC ENERGY AGENCY, Regulatory Control of Radioactive
Discharges to the Environment, IAEA Safety Standards Series No. WS-G-2.3, IAEA,
Vienna (2000).
Appendix I
ARTEFACTS AND TROUBLESHOOTING
E. BUSEMANN SOKOLE
Department of Nuclear Medicine,
Academic Medical Center,
Amsterdam, Netherlands
N.J. FORWOOD
Department of Nuclear Medicine,
Royal North Shore Hospital,
Sydney, Australia
I.1. THE ART OF TROUBLESHOOTING
I.1.1. Basics
Troubleshooting refers to the process of recognizing and identifying the
cause of an artefact, a malfunction or a problem in an instrument. The problem
could be immediately obvious, for example, the instrument does not work at all
or a particular component stops working (such as the computer, the mechanism
for whole body scanning or the automatic mechanism for collimator exchange).
The malfunction could also be less obvious, and be recognized only by an
abnormality in the expected result (such as the pattern formed by a defective
photomultiplier tube (PMT) in the gamma camera clinical or quality control
(QC) image or an unexpected calibration result in a radionuclide dose calibrator).
Such an abnormality is generally referred to as an artefact, in particular, when
observed in images.
The malfunctioning of an instrument can occur at any time. It might
become evident from a routine QC test. However, it is especially stressful when
it occurs during a patient investigation. In such a situation, the first lines of action
are to minimize the patient's distress over the problem that has occurred, to remain
calm and clear headed, to immediately try to identify the problem and correct it,
if possible, and to decide whether the investigation can be continued, either on
the same instrument or another similar one, or whether the investigation must be
rescheduled. An action flow chart is useful in the decision making process. Such
a flow chart is shown in Fig.I.1 for actions following a QC test.
FIG.I.1. Decision tree suggested for the performance, evaluation and follow-up of a quality
control test. The symbols indicate: (a) start or end; (b) process to be performed; (c)
protocol; (d) intermediate results; (e) checks required; (f) decision to be made; (g) action
taken. Question answers: Y = yes; N = no.
FIG.I.2. Example of the digital documentation (in Dutch, with log translation) of a gamma
camera gantry error. This is one item from the digital database (developed in house) of
a troubleshooting log. The error report includes the instrument type, the date of problem
report, action priority, room location, name of responsible person, company information, log
describing the problem, first-line actions taken and their results. A photograph is included of
the gantry error message readout.
(e) Enter all problems and as much related data as possible into the log book
specific for the instrument. The solutions should also be documented. A
well documented and maintained digital record can be especially useful for
assisting with troubleshooting by a search for a previous similar problem or
if a repeat problem occurs at a later date. This log book should be started
at installation and maintained throughout the lifetime of the instrument,
together with preventive maintenance reports and any major modification.
Such a log book can also be linked to the QC results and records.
(f)
FIG.I.3. Weekly system uniformity image (left) from one detector of a dual head gamma
camera. The image was obtained with all corrections activated (linearity, energy, uniformity),
low energy high resolution collimator, 57Co flood source, symmetric energy window over
122 keV, 256 × 256 matrix, 4 million counts. On the lower left side, there is an irregular
hot semicircular area. The National Electrical Manufacturers Association (NEMA) uniformity
quantification in the useful field of view (UFOV; right image outer rectangle) confirms that
this non-uniformity is outside of specifications. The problem was a loss of gel between the
border photomultiplier tube and crystal. Once the gel was replaced, the uniformity was
restored. Note: This camera required a service. The defect affected imaging at the edge of the
field of view. This detector could, therefore, still be used for imaging within the central field of
view (CFOV; right image, inner rectangle) area, for which the NEMA differential uniformity
values were satisfactory (planar and whole body imaging, and with caution SPECT imaging).
FIG.I.4. Routine system uniformity images from a dual head gamma camera. The images
were obtained with all corrections activated, low energy all-purpose collimators, 57Co flood
source, symmetric energy window over 122 keV, 64 × 64 matrix, 16 Mcounts. The uniformity
was quantified with the National Electrical Manufacturers Association (NEMA) integral and
differential uniformity parameters in both the useful field of view (UFOV) and central field of
view (CFOV). The values from detector head 1 were within the expected limits. Detector head 2
shows a gross non-uniformity pattern corresponding to the photomultipliers. This pattern was
due to a failure of the electronic correction due to bad electrical contacts of the circuit boards.
After re-seating the relevant circuit boards, the problem was solved and uniformity was
restored, as confirmed by a follow-up test. Note: The non-uniformity of head 2
was extensive and, thus, imaging with this detector had to be suspended until the problem was
solved.
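The NEMA integral uniformity figure quoted in these captions is, in essence, a max/min contrast over the chosen field of view. A minimal sketch of the calculation (the full NEMA NU 1 procedure also prescribes nine-point smoothing and edge-pixel masking, which are omitted here):

```python
import numpy as np

def integral_uniformity(flood):
    """Integral uniformity (%) over a flood image (or a sub-field of it):
    100 * (max - min) / (max + min) of the pixel counts."""
    mx = float(flood.max())
    mn = float(flood.min())
    return 100.0 * (mx - mn) / (mx + mn)

# Toy 3x3 "flood" matrix: max 105, min 95 -> (10 / 200) * 100 = 5.0 %
flood = np.array([[ 95, 100, 102],
                  [ 98, 105, 100],
                  [101,  99,  97]], dtype=float)
iu = integral_uniformity(flood)
```

The same routine can be applied to the UFOV and the CFOV separately, which is why a detector can fail in the UFOV yet remain usable within the CFOV, as in Fig. I.3.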
FIG.I.5. A 99mTc phosphonate bone scan obtained with a dual head gamma camera.
(a) Anterior and posterior whole body images. (b) R lateral and L lateral static images of the
left knee. (c) Routine system uniformity quality control image of detector 1 taken 2 d later. The
photomultiplier tube artefact is at the upper border of the field of view. Note: The bone scan
was reported without noticing the malfunctioning photomultiplier tube of detector 1, which was
only discovered at the following routine QC test. On review, the effect of the photomultiplier
tube artefact was not discernible in the anterior whole body scan made with detector 1 (a), but
was visible in the R lateral static of the bone scan using a colour table and high contrast that
highlighted low count areas. This example illustrates the need for alertness to unexpected
malfunctions. The camera required a service, but could still be used with caution for planar
imaging within a limited part of the detector. Owing to the nature of the clinical bone study
and the location of the photomultiplier tube artefact, this study was not repeated.
FIG.I.6. A clinical lung perfusion study using 99mTc macroaggregates. (a) Images of the lungs
were obtained with camera 1 (top row: posterior, right posterior oblique, right lateral;
bottom row images: anterior, left lateral, left posterior oblique). The irregular pattern of hot
and cold areas was not at first recognized as a camera problem. After two subsequent patients
demonstrated the same patchy pattern in their lung perfusion images, the clinicians reviewing
the studies questioned the results and troubleshooting was initiated. Quality control of the
radiopharmaceutical was acceptable. A uniformity quality control test of the gamma camera
was made and this revealed gross non-uniformity ((c) left image). All patients were recalled
and re-imaged on camera 2. (b) Lung perfusion images obtained on camera 2 (with the same
image order as in (a)). Camera 1 was retuned, which restored uniformity ((c) right image).
Further investigation revealed that there had been a power disruption during the night. This
had corrupted the energy correction values, explaining the non-uniformity.
Note: A routine quality control uniformity test had not been performed at the start of the day's
clinical imaging. If this had been done, the problem would have been identified immediately.
An uninterruptible power supply was later installed in order to prevent a similar future
occurrence. (For more details, see the IAEA Quality Control Atlas for Scintillation Camera
Systems.)
(h) After a problem has been solved, the instrument should be tested for correct
functioning before being released for clinical use. If computer software or
hardware has been changed, reboot and restart the system to ensure that the
system works after a power down:
(i) Be aware of any changes in hardware or software that could affect
quantitative results. Validate the results.
FIG.I.7. (a) Quality control images over the whole field of view of the acquisition data of a
SPECT myocardial perfusion study (left: one projection image; middle: sinogram over the
whole field of view (X); right: linogram over the whole field of view (Y)). The images were
obtained from a 3-detector SPECT system (120° rotation per head, starting with head 1, and a
360° total rotation). The linogram shows an upwards shift in the images from head 1 towards
the finish of the 120° rotation (first third of the dataset). (b) In order to clarify the
situation, a SPECT acquisition was made of a point source placed off-axis, using the same data
acquisition parameters. Quality control images of this acquisition (same image order as above), and their quantitative
offset analysis (lower row). Offsets are seen in both X and Y in detector 1 data, identified
clearly by the jump in offset in both X and Y on the quantitative analysis. The problem was due
to a decrease in voltage to the signal board of detector head 1 at certain projection angles.
This was found to be due to instability in the power cable connected to the signal board.
The problem was resolved only after replacing the cable. Note: This problem was difficult to
locate. A problem was signalled in patient studies by the reporting nuclear medicine physician
who observed upwards motion in the quality control review of the patient SPECT data from
successive patients. Initially, a corrupted centre of rotation calibration was considered to be
the cause. However, the problem repeated itself, and was again recognized on subsequent
clinical and point source SPECT acquisitions. It took much persistence from the department
to keep on testing the system and several visits by the service engineer before the problem was
found. The upwards shift was only seen in one detector head, thus pointing to a problem with
the camera and not movement of the patient, which would have been seen in the acquired data
of all three heads, at the same time frames.
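The quantitative offset analysis described above can be approximated by tracking the axial (Y) centroid of each projection and flagging jumps between successive angles. A sketch under stated assumptions: the array shapes, the synthetic point source and the 0.5 pixel threshold are illustrative choices, not values from this text.

```python
import numpy as np

def axial_centroids(projections):
    """Per-projection centroid along the axial (Y) axis, i.e. the quantity
    plotted in a linogram. projections: array of shape (n_angles, ny, nx)."""
    y = np.arange(projections.shape[1], dtype=float)
    profiles = projections.sum(axis=2)              # collapse X -> (n_angles, ny)
    return (profiles * y).sum(axis=1) / profiles.sum(axis=1)

def flag_jumps(centroids, threshold=0.5):
    """Projection indices where the centroid moves by more than `threshold`
    pixels from the previous angle - a candidate head-alignment problem."""
    return np.where(np.abs(np.diff(centroids)) > threshold)[0] + 1

# Synthetic off-axis point source: steady at y = 5 for the first head's
# projections, then shifted to y = 8, mimicking a per-head offset
proj = np.zeros((12, 16, 16))
proj[:4, 5, 8] = 1.0
proj[4:, 8, 8] = 1.0
suspect = flag_jumps(axial_centroids(proj))   # flags the transition angle
```

A jump that coincides with a detector-head changeover points at the camera rather than at patient motion, which would appear in the data of all heads at the same time frames.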
FIG.I.8. (a) A clinical 111In somatostatin receptor study obtained with a single detector gamma
camera. The upper and lower abdomen in the anterior view (top two images), and the upper
and lower abdomen in the posterior view (bottom two images). Each image of the clinical
study showed two large, diffuse, circular colder areas (indicated by arrows). (b) Uniformity
image obtained after the clinical study, which shows two large cold areas with a hot border,
each due to a defective photomultiplier tube. The non-uniformities in this example were large
enough to be recognized in the patient's images at the time of imaging. The images could,
therefore, be repeated on another gamma camera system. The problem occurred intermittently,
but the fault was never localized. The camera was finally replaced. (Example 2.2.8.6 in the
IAEA Quality Control Atlas for Scintillation Camera Systems.)
FIG.I.9. Static images of the skeleton after administration of 99mTc phosphonate. Images
obtained on camera A (top left: posterior; top right: right anterior; bottom left: right
anterior oblique) show an area of apparent decreased activity in the lower spine that is
especially evident in the posterior and right anterior oblique views. As the cold area in the
lower spine was unusual, it was not considered to indicate pathology but to be an artefact.
The posterior skeleton was, therefore, imaged on camera B, and these images show a normal
99mTc-phosphonate distribution in the lower spinal column. Subsequently, a uniformity
image was obtained on camera A that demonstrated a defective photomultiplier tube that
corresponded to the area of decreased activity in the skeleton images. Camera A required
servicing before further clinical images were performed. Note: The study was reviewed before
the patient left the department. If a second camera had not been available, the patient could
have been shifted so as to image the lower skeleton in another part of the camera field of view.
(Example 2.2.8.5 in the IAEA Quality Control Atlas for Scintillation Camera Systems.)
FIG.I.10. Top row: A series of routine intrinsic uniformity images obtained with
99mTc, uniformity correction turned off, with the energy window set symmetrically (left),
asymmetrically low (middle) and asymmetrically high (right) over the photopeak. The bottom
image is a repeat intrinsic uniformity image obtained 6 months later on the same gamma
camera. Each image was made with 5 Mcounts in a 256 × 256 matrix. The asymmetrical
low image shows a large diffuse rectangular hotter central area that corresponds on the
asymmetrical high image to a colder central area. Six months later, the same central colder
area is now visible on the image obtained with the symmetrical photopeak. This artefact was
caused by a separation of the light pipe from the crystal. This problem could not be fixed and
the whole detector required replacement. In this particular case, the detector replacement was
covered in the service contract. It would otherwise have been a very expensive repair. Note: The
colour scale used in these images is reversed between the first images (0–100% counts = black
to white) and the image made 6 months later (0–100% counts = white to black). Recording the
colour scale together with the images is essential, not only for quality control images but also
for clinical images. The colour scale and any colour enhancement should always be taken into
consideration when reviewing images.
FIG.I.11. Intrinsic uniformity images obtained at 3 months after installation of a new gamma
camera. The images were obtained with the uniformity correction turned off (but linearity
and energy corrections activated), using a 99mTc point source, 256 × 256 matrix, 5 Mcounts
total. The left image was obtained with the 99mTc energy window set symmetrically over the
photopeak. It shows a suspicious small cold spot on the lower border. In order to investigate
this further, the energy window was offset on the lower half of the photopeak (see diagram).
The image obtained with this window setting (right) shows a distinct hot spot at the same
location as the cold spot on the left image, as well as two other small hot spots close by. This
is the result of crystal hydration. The detector can still be used for the time being, because
the hydrated areas are at the edge of the field of view. However, hydration will continue to
develop. The detector required replacement. In this case, the problem was discovered soon
after installation within the guarantee period, so that replacement could be made under the
guarantee. Note: If this situation is observed in an older gamma camera, a replacement
strategy for the detector must be planned. The development of hydration requires close
monitoring by weekly or monthly asymmetric uniformity images until replacement takes place.
with a 57Co source, and inadvertently the energy window had not been
reset to 99mTc.
(ii) Performing an automatic peaking procedure with the patient as the
radioactive source: Owing to the large additional scatter component
in the photopeak, the window will automatically adjust too low over
the photopeak. The clinical image will thereby include unnecessary
scatter.
In SPECT, a decrease in resolution and contrast may be related to:
(i) The imaging technique (e.g. excessively large radius of rotation, poor
choice of acquisition and reconstruction parameters).
FIG.I.12. Periodic intrinsic uniformity images obtained with 99mTc, the uniformity correction
turned off, and asymmetric energy windows set low (left) and high (right) over the photopeak
(each image: 5 Mcounts, 256 × 256 matrix). The images demonstrate extreme crystal hydration
over the whole field of view: small hot spots in the low asymmetric window correspond to small
cold spots in the high asymmetric window. The asymmetric images also show some poor tuning
(especially in the top right corner). The extent of the hydration indicates that this detector
requires replacement.
FIG.I.13. Routine 6-monthly quality control test of intrinsic linearity and spatial resolution
of a small field of view gamma camera, using a slit phantom with 1 cm spacing between slits,
99mTc point source, 1024 × 1024 matrix (pixel size: 0.29 mm). The acquired images were
quantified within the indicated rectangles. The spatial resolution was within specification.
However, linearity (absolute deviation: Abs Dev; maximum line deviation: Max LineDev)
was out of specifications in both the X and Y directions. The linearity correction maps
needed recalibration. NEMA: National Electrical Manufacturers Association; FWHM: full
width at half maximum; FWTM: full width at tenth maximum; UFOV: useful field of view;
CFOV: central field of view.
FIG.I.14. Top: Quality control images of a SPECT myocardial perfusion study using a dual
head gamma camera in a 90° configuration, 180° rotation (90° per head), 128 × 128 matrix,
4.8 mm pixel size. Images: left: one projection of the acquired data; middle: sinogram (X)
of a profile over the myocardium (shown in the left image); right: linogram (Y) for the same
profile. There is a discontinuity between detector 1 and 2 visible on both the sinogram and
linogram. Bottom: After recalibration of the centre of rotation and head alignment (first
troubleshooting technique applied), a test acquisition was made of a 99mTc point source placed
off-axis (left image projection). The sinogram (middle) shows no discontinuity, whereas the
linogram (right) shows both discontinuity and slope in the data particularly evident in the
quantified offset graph. The problem was a detector head tilt in both heads, which required a
service.
Physical set-up:
Collimator
200 MBq 99mTc point source placed at 4 m from the collimator, off-centred
1024 × 1024 matrix
10 Mcounts
FIG. I.15. Acceptance testing of a medium energy collimator using a distant point source of
99mTc (acquisition parameters given above). The point source was positioned first to image
the right part (left image) and then the left part of the collimator (right image). There are
vertical discontinuities evident, probably from the manufacturing process. This collimator was
replaced within the guarantee period. Note: This is a sensitive method for testing a collimator
for hole angulation problems and for any suspected damage. A large distance between the
source and collimator is essential. This test supplements a high count system uniformity test.
FIG. I.16. Routine intrinsic uniformity image with quantification. The image was obtained
with a 99mTc point source, symmetrical energy window over 140 keV, 30 Mcounts. The image
shows a hot semicircular area in the lower right field of view. The quantification indicates
that the uniformity is acceptable. However, not indicated in the results, the uniformity
calculation refers only to the central field of view. The artefact was due to a malfunctioning
photomultiplier tube. Note: This camera required a service but could continue to be used with
caution because of the lateral location of the defect. This example illustrates that it is essential
to review together both image and quantification, and to understand the parameters provided
in the results.
FIG. I.17. (a) Clinical whole body images obtained on a PET system in which the
normalization correction was corrupted, but not known at the time. The sagittal view shows
a pattern of repetitive cold horizontal stripes at consistent locations within each of the bed
positions. The periodic nature of the artefact is a sign that the problem is associated with
the system rather than this particular patient or acquisition. (b) Sagittal view of a uniformity
quality control check of the PET system acquired using a uniformly filled cylindrical phantom.
The image shows cold stripes indicative of errors in the normalization table. This quality
control image was obtained after the clinical images revealed the artefacts shown in (a).
The quality control image shows several cold streaks which indicate that the problem is most
likely a corrupted normalization file. Normalization was recalibrated before further patient
acquisitions were performed. (Courtesy of the Department of Nuclear Medicine, Monte Tabor
São Rafael, Brazil.)
failure may not contraindicate the clinical use of a PET system since modern
scanners have many detectors and the absence of one block may have little
statistical impact. Examination of the sinogram (also in clinical images) is a good
way to test for block failure, as it appears as a distinctive diagonal streak on the
sinogram.
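The diagonal-streak signature of a failed block can be reproduced with a toy simulation (a minimal sketch: the ring size, block size and uniform counts are illustrative assumptions, not parameters of any particular scanner):

```python
import numpy as np

n = 64                      # detectors in a toy ring (illustrative)
sino = np.ones((n, n))      # sinogram bins (view, radial) filled with uniform "counts"
dead = set(range(8))        # one failed block: detectors 0-7

for i in range(n):
    for j in range(n):
        if i == j:
            continue
        view = (i + j) % n          # projection angle index of the LOR (i, j)
        rad = (j - i) % n           # radial index of the LOR (i, j)
        if i in dead or j in dead:
            sino[view, rad] = 0.0   # LORs touching the dead block record nothing
```

Every LOR through a dead detector i satisfies view − rad ≡ 2i (mod n), so the zeroed bins line up along a diagonal, one diagonal per dead crystal; the diagonals of the crystals in one block sit side by side and merge into the single wide streak described above.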
I.4.2. Attenuation correction artefacts
AC artefacts occur when the CT AC algorithm leads to a hot or cold spot
in the attenuation corrected reconstructed data. AC effectively increases the
counts in each voxel in proportion to the total attenuation along all LORs that
pass through that voxel. When the CT image shows a highly attenuating material
in a group of voxels, then the total counts along all lines of response that pass
through those voxels are increased, and the group of voxels appears hot. This is
FIG. I.18. Sinogram from a PET system that has a detector block failure. The diagonal streak
that is clearly visible is the pattern created when one detector block has failed and causes
many lines of response to be zero. The failed detector block creates several blank lines of
response at every projection angle at incremental radial positions and the result is a diagonal
streak. With only one streak visible, and the fact that the streak is several pixels wide, it would
be appropriate to assume that a whole detector block has failed. The noticeable width in
the streak occurs because each detector block contains many individual detector elements.
Multiple simultaneous detector block failure is unlikely in a system which has regular quality
assurance tests. This system is still acceptable for clinical use because there are many detector
blocks in a PET system and the loss of one block results in a drop in sensitivity of only ~0.5%.
particularly noticeable where the patient has metal implants or has taken contrast
media. The attenuation of metal and contrast at the CT energy does not relate
linearly to the attenuation at annihilation photon energy. In this situation, the AC
is overestimated and a hot spot appears erroneously at the point where the metal
or contrast media is found. Figure I.19 demonstrates a contrast artefact leading
to a hot spot that appears cold on the corresponding non-AC PET image. The
non-AC images are often a key component in recognizing metal or contrast based
artefacts, but the presence of streaks in the CT image is also a warning sign.
The non-AC images should always be reviewed whenever any dubious finding is
suspected in the AC images.
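The overcorrection mechanism can be made concrete with a one-LOR sketch (the attenuation coefficients and path lengths below are rough illustrative values, not data from any scanner):

```python
import math

# Attenuation correction multiplies the measured counts by exp(sum of mu x length)
# along the LOR. Values are approximate 511 keV coefficients, for illustration only.
mu_tissue = 0.096     # cm^-1, water-like soft tissue at 511 keV (approximate)
mu_metal = 0.30       # cm^-1, assumed metal/contrast-like value (illustrative)
path_cm = 20.0        # total tissue path along the LOR

acf_true = math.exp(mu_tissue * path_cm)

# The CT-derived mu-map wrongly assigns the metal-like coefficient over 2 cm of the
# path (e.g. pooled contrast), so the applied correction factor is inflated:
acf_wrong = math.exp(mu_tissue * (path_cm - 2.0) + mu_metal * 2.0)

overestimate = acf_wrong / acf_true   # = exp((mu_metal - mu_tissue) * 2 cm), about 1.5
```

Because the inflated factor multiplies the measured counts, the voxels along that LOR come out roughly 50% too hot in this sketch, which is exactly the false focal uptake that disappears on the non-AC image.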
Another AC artefact is due to truncation, where the CT and PET FOVs
are not the same size, so that parts of the anatomy outside the CT FOV are not
corrected for by the AC algorithm. This often occurs when the patient's arms
(which are raised above the head during the acquisition) are outside the CT FOV
and a cold stripe appears across the patient's head in the AC images. In Fig. I.20,
a cold stripe is prominent in the AC images but not visible in the non-AC images.
Some PET/CT systems include software that can reconstruct the truncated CT
data to increase the FOV and, thereby, reduce the severity of the artefact.
FIG. I.19. (a) CT attenuation corrected image of a patient showing a focal hot spot (indicated
by the arrow). (b) Non-attenuation corrected image. The hot spot is no longer visible. (c) CT
image showing a high density artefact from barium contrast pooled in the bowel. The artefact
appears to be a region of high attenuation and the reconstruction algorithm overcompensates
and creates a false hot spot. On the non-attenuation corrected image, the hot spot is entirely
gone. These high density material artefacts are very common and the user should always
examine the non-attenuation corrected image to check for the presence of such artefacts. Clues
can also be found by examination of the corresponding location on the CT. (Courtesy of the
Department of Nuclear Medicine, Memorial Sloan Kettering Cancer Center, New York.)
acquired far more quickly and, thus, demonstrate blurring over only a small
component of the respiratory cycle. This can create a mis-registration and
blurring in the AC PET images at the boundary of lung and liver. This kind
of artefact can have significant clinical implications when a tumour is found
near the border between the lungs and the liver. Figure I.23 shows an example
where a lesion in the liver is incorrectly located in the lungs. This is due to the
CT being acquired during full inspiration (the patient has taken a deep breath,
pushing the diaphragm down and displacing the liver caudally), as opposed to
the PET which is time averaged over regular tidal breathing, resulting in a severe
mis-registration between the functional and anatomical location of the lesion.
Conversely, Fig. I.24 shows an example where a tumour is incorrectly located in
the liver due to a respiratory motion artefact.
Respiratory motion artefacts can also be seen in the CT image itself where
the liver is not correctly rendered during reconstruction of the CT data because
the patient is breathing during acquisition. This can be seen in Fig. I.25 where
there is a characteristic artefact repeated along the axis of motion, leading to
unclear definition of organ boundaries.
I.5. IMPORTANCE OF REGULAR QUALITY CONTROL
Regular QC procedures vary between scanner vendors; however, daily
QC often requires checking the gantry status (voltages, temperatures, etc.) and
FIG. I.22. (a) A whole body PET/CT scan shows a point of focal uptake in the upper torso.
(b) Separate head and neck acquisition of the same patient; the focal hot spot seems to have
moved to a different location. (c)Whole body fused images show good registration between
PET and CT in the bladder, spine and heart, and so it was assumed that the images were
correctly registered; however, closer inspection of the head shows a clear mis-registration
(indicated by the red arrow). (d) Fusion between PET and CT in the separate head and neck
view shows good registration. The operator did not closely inspect the head and neck portion
of the whole body view since there was a separate head and neck acquisition; however, the
patient moved during the scan, probably by rotating the head. Had there not been a separate
head and neck view, the doctor would have reported the focal uptake as metastatic cancer.
This image was reported to the staff physicist as a problem with the system; however, it was in
fact operator related. (Courtesy of the Nuclear Medicine Department, St. Vincent's Hospital,
Darlinghurst, Australia.)
FIG. I.23. (a) Transaxial PET and CT images show a focal lesion, which appears to be in
the lung. (b) The CT image shows the same lesion appearing to be in the liver. (c) Coronal
view of PET and fused PET/CT where the lesion appears largely displaced from the liver. This
problem occurs because the CT acquisition is very brief and captures the liver at one point
in the respiratory cycle (in this case, full inspiration such that the diaphragm has displaced
the liver caudally), while the PET acquisition is much longer and averaged over the whole
respiratory cycle. The activity of the lesion is underestimated due to an attenuation correction
artefact. This artefact stems from the fact that lung tissue is less attenuating than liver tissue
and so the reconstruction process under-corrects for the attenuation of the signal from the
lesion when it is assumed to be in the lung. The lesion demonstrates intense uptake and so is
still highly visible despite the attenuation correction artefact.
FIG. I.24. (a) Coronal PET image showing an area of focal uptake that appears to be both in
the liver and in the lung, as well as another larger area that appears to be entirely in the lung.
(b) Fused coronal PET/CT image showing a mis-registration in the larger lesion. Both of these
lesions are entirely contained in the lung and the elongated appearance of the smaller lesion
is an artefact created by respiratory motion.
of this for the physicist is the necessity to regularly perform a check of the
standardized uptake value (SUV). The SUV is a quantitative parameter often
quoted in clinical PET reporting that represents the uptake of activity in a lesion
relative to background or healthy tissue (which should have an SUV = 1). The
SUV measure is highly dependent on patient preparation, scanning protocol and
reconstruction technique, and should be used with caution. It also relies heavily
on accurate scanner calibration relative to the department dose calibrator, which
allows PET data to be quantitative. Despite these difficulties, this index is often
used by physicians for indicating abnormal uptake and, in particular, monitoring
patient response during treatment by comparison of the SUV at baseline to
the SUV during or after therapy. As such, the physicist must verify that such
measures are accurate. A monthly check of the scanner SUV should be performed
using a phantom of known volume, which, if activity is homogeneous and the
scanner and dose calibrator are correctly calibrated, should produce an SUV = 1.
Erroneously low SUVs may indicate that the physicist needs to recalibrate the
dose calibrator and PET scanner, through the calculation of a new calibration
factor.
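The monthly phantom check can be scripted along these lines (a sketch: the function name, argument units and the 18F half-life default are illustrative assumptions, not a vendor's routine):

```python
def suv_bw(conc_kbq_per_ml, injected_mbq, weight_kg,
           minutes_post_injection, half_life_min=109.8):
    """Body-weight SUV sketch: measured tissue concentration divided by the
    injected activity per gram of body mass (1 g of tissue taken as 1 mL),
    with the injected activity decay-corrected to scan time (default: 18F)."""
    decayed_kbq = injected_mbq * 1000.0 * 0.5 ** (minutes_post_injection / half_life_min)
    dose_per_g = decayed_kbq / (weight_kg * 1000.0)   # kBq per gram
    return conc_kbq_per_ml / dose_per_g

# Uniform-phantom check: if the whole decay-corrected activity is spread evenly
# through the imaged volume and the scanner and dose calibrator agree, the
# measured concentration divided by the dose per gram must come out as 1.0.
```

A measured phantom SUV persistently different from 1 then points at the scanner-to-dose-calibrator cross-calibration rather than at any individual patient study.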
FIG. I.25. (a) A sagittal image shows a step-like artefact in the liver. (b) The same step-like
artefact is seen on the coronal views. This problem is caused by the patient breathing during
the CT acquisition and distorting the size of the liver. These artefacts are very common and can
be compensated for by using a breath-hold technique.
BIBLIOGRAPHY
BUSEMANN SOKOLE, E., PŁACHCIŃSKA, A., BRITTEN, A., Acceptance testing for
nuclear medicine instrumentation, Eur. J. Nucl. Med. Mol. Imaging 37 (2010) 672–681.
BUSEMANN SOKOLE, E., et al., Routine quality control recommendations for nuclear
medicine instrumentation, Eur. J. Nucl. Med. Mol. Imaging 37 (2010) 662–671.
INTERNATIONAL ATOMIC ENERGY AGENCY (Vienna)
Handbook on Care, Handling and Protection of Nuclear Medicine Instruments (1997).
IAEA Quality Control Atlas for Scintillation Camera Systems (2003).
Quality Assurance for SPECT Systems, IAEA Human Health Series No. 6 (2009).
Quality Assurance for PET and PET/CT Systems, IAEA Human Health Series No. 1 (2009).
PET/CT Atlas on Quality Control and Image Artefacts, IAEA Human Health Series No. 27
(2014).
Appendix II
RADIONUCLIDES OF INTEREST IN DIAGNOSTIC
AND THERAPEUTIC NUCLEAR MEDICINE
Radionuclide   Half-life    Principal radiations   Energy of emission^a (MeV)                Abundance (in same order as emissions)
                            emitted

11C            20.5 min     β+ (γ)                 0.39                                      100%
13N            9.97 min     β+ (γ)                 0.49                                      100%
15O            122.2 s      β+ (γ)                 0.74                                      100%
18F            109.8 min    β+ (γ)                 0.24                                      96.9%
32P            14.2 d       β−                     0.695                                     100%
51Cr           27.7 d       γ                      0.32                                      9%
57Co           271.7 d      γ                      0.122                                     86%
62Cu           9.7 min      β+ (γ)                 2.93 (max.)                               97%
64Cu           12.8 h       β+, β− (γ)             0.28 (β+); 0.19 (β−)                      17% (β+); 39% (β−)
67Cu           62 h         γ, β−                  0.091 (γ1); 0.093 (γ2); 0.184 (γ3);       7% (γ1); 16% (γ2); 49% (γ3);
                                                   0.121 (β1); 0.154 (β2); 0.189 (β3)        56% (β1); 23% (β2); 20% (β3)
67Ga           78 h         γ                      0.093; 0.184; 0.300; 0.394                38%; 24%; 16%; 4%
68Ga           68 min       β+ (γ)                 0.74                                      88%
81mKr          13.1 s       γ                      0.191                                     100%
82Rb           75 s         β+ (γ)                 1.4                                       96%
89Sr           50.5 d       β−                     0.585                                     100%
89Zr           78.4 h       β+ (γ)                 0.897 (max.); 0.909 (γ)                   22.3%; 99%
90Y            64 h         β−                     0.93                                      100%
99mTc          361.2 min    γ                      0.1405                                    89%
111In          67.4 h       γ                      0.172; 0.247                              89%; 94%
123I           13 h         γ                      0.159                                     97%
124I           4.2 d        β+ (γ)                 0.97 (β+1); 0.69 (β+2);                   11% (β+1); 12% (β+2);
                                                   0.603 (γ1); 1.69 (γ2)                     62% (γ1); 11% (γ2)
125I           60.2 d       γ                      0.036                                     7%
131I           8.04 d       β− (γ)                 0.19 (β); 0.364 (γ1); 0.637 (γ2)          90% (β); 83% (γ1); 7% (γ2)
131Cs          9.7 d                               0.353                                     100%
133Xe          5.25 d       β− (γ)                 0.10 (β1); 0.081 (γ1)                     100% (β1); 37% (γ1)
153Sm          1.95 d       β− (γ)                 0.20 (β1); 0.23 (β2); 0.27 (β3);          32% (β1); 50% (β2); 18% (β3);
                                                   0.070 (γ1); 0.103 (γ2)                    5% (γ1); 28% (γ2)
166Ho          26.8 h       β−                     0.70 (β1); 0.65 (β2)                      50% (β1); 49% (β2)
169Er          9.4 d        β−                     0.10                                      100%
177Lu          6.73 d       β−                     0.15 (β1); 0.12 (β2)                      79% (β1); 9% (β2)
186Re          3.78 d       β− (γ)                 1.07 (β1); 0.93 (β2); 0.140 (γ)           74% (β1); 21% (β2); 9% (γ)
188Re          17.0 h       β−                     0.79 (β1); 0.73 (β2)                      70% (β1); 26% (β2)
198Au          2.7 d        β− (γ)                 0.32 (β); 0.40 (γ)                        99% (β); 96% (γ)
201Tl          73 h         γ, X                   0.167 (γ); 0.070 (X1); 0.080 (X2)         8% (γ); 74% (X1); 20% (X2)
211At          7.14 h       α                      5.868                                     42%^b
212Bi          1.01 h       α, β−                  6.1 (α); 2.25 (β)                         34% (α); 55% (β)
213Bi          45.6 min     α                      5.5–5.9 (α)                               100% (α)
223Ra          11.4 d       α (γ)                  5.5–5.7 (α); 0.154 (γ)                    97% (α)^c; 6% (γ)
ABBREVIATIONS
AAPM American Association of Physicists in Medicine
AC
attenuation correction; alternating current
ACR American College of Radiology
ADC
analogue to digital converter
ADT
admission, discharge, transfer
AFOV
axial field of view
ALARA
as low as reasonably achievable
ANSTO Australian Nuclear Science and Technology Organisation
APD
avalanche photodiode
ASCII American Standard Code for Information Interchange
BED
biological effective dose
BGO
bismuth germanate
BMIPP  β-methyl-p-iodophenylpentadecanoic acid
BMP bitmap
BSS Basic Safety Standards
CDR
collimator–detector response
CDF
cumulative density function
CERN European Organization for Nuclear Research
CF
calibration factor
CFD
constant fraction discriminator
CFOV
central field of view
CIE International Commission on Illumination
CLUT
colour lookup table
CMOS
complementary metal oxide semiconductor
CMS
colour management system
CMYK
cyan, magenta, yellow, key (black)
COTS
commercial off the shelf
cpm  counts per minute
cps  counts per second
CR
contrast ratio
CRT
cathode ray tube
CT
computed tomography
CZT
cadmium zinc telluride
DC  direct current
DCT  discrete cosine transform
DDL  digital driving level
DIB
device independent bitmap
DICOM Digital Imaging and Communications in Medicine
DMSA
dimercaptosuccinic acid
DNA
deoxyribonucleic acid
dpi
dots per inch
DPM  disintegrations per minute
DRL
diagnostic reference level
DSB
double strand break
DTPA
diethylenetriaminepentaacetic acid
DVH
dose–volume histogram
EANM European Association of Nuclear Medicine
EBRT
external beam radiotherapy
ECG electrocardiogram
EDV
end diastolic volume
EGF
epidermal growth factor
EGS
electron gamma shower
e–h  electron–hole
EM
expectation maximization
ERPF
effective renal plasma flow
ESV
end systolic volume
FBP
filtered back projection
FDA Food and Drug Administration
FDG fluorodeoxyglucose
FFT
fast Fourier transform
FORE  Fourier rebinning algorithm
FOV
field of view
FWHM
full width at half maximum
FWTM
full width at tenth maximum
GFR  glomerular filtration rate
GIF  Graphics Interchange Format
GPU  graphics processing unit
GSDF  grayscale standard display function
GSO  gadolinium oxyorthosilicate
HMPAO  hexamethylpropyleneamine oxime
HPMT
hybrid photomultiplier tube
HTML Hypertext Markup Language
HU Hounsfield unit
HVL
half-value layer
LCD
liquid crystal display
LET
linear energy transfer
LOR
line of response
LQ linearquadratic
LR
luminance ratio
LS
least squares
LSF
line spread function
LSO
lutetium oxyorthosilicate
LUT
lookup table
LZW  Lempel–Ziv–Welch
MAA  macroaggregate of albumin
MAG3  mercaptoacetyltriglycine
MAP  maximum a posteriori
MCNP  Monte Carlo N-particle transport code
MCP  microchannel plate
MFP  mean free path
MIBG  metaiodobenzylguanidine
MIBI  methoxyisobutylisonitrile
MIP  maximum intensity projection
MIRD  medical internal radiation dose
MLEM  maximum-likelihood expectation-maximization
MR  magnetic resonance
MRI  magnetic resonance imaging
MRN  medical record number
MSE  mean square error
MTF  modulation transfer function
MUGA  multigated acquisition
MWPC  multiwire proportional chamber
NEA  Nuclear Energy Agency
NECR  noise equivalent count rate
NEMA  National Electrical Manufacturers Association
NET  neuroendocrine tumour
NHEJ  non-homologous end joining
NIST  National Institute of Standards and Technology
NPL  National Physical Laboratory
NTCP  normal tissue complication probability
NURBS  non-uniform rational B-spline
OER  oxygen enhancement ratio
OSEM  ordered subsets expectation maximization
PACS  picture archiving and communication system
PAH  para-amino hippurate
PAHO  Pan American Health Organization
PCS  profile connection space
PDF  Portable Document Format
PDR  perceived dynamic range
PET  positron emission tomography
PMT  photomultiplier tube
POPOP  para-phenylene-phenyloxazole
PSF  point spread function
PSRF  point source response function
PV  plasma volume
PVE  partial volume effect
QA  quality assurance
QC  quality control
QMS  quality management system
RAMDAC  random access memory digital-to-analogue converter
RAMLA  row-action maximum likelihood algorithm
RBE  relative biological effectiveness
RC  recovery coefficient
RGB  red, green, blue
RIS  radiology information system
RIT radioimmunotherapy
ROI
region of interest
RPO
radiation protection officer
RPP
radiation protection programme
RSS  Really Simple Syndication
SI International System of Units
SiPM
silicon photomultiplier tube
SNR
signal to noise ratio
SPECT
single photon emission computed tomography
SPR
scatter to primary ratio
SSB
single strand break
SUV
standardized uptake value
TCP  tumour control probability
TDC  time-to-digital converter
TEW  triple energy window
TIFF  Tagged Image File Format
TOF  time of flight
TLG  total lesion glycolysis
TVL  tenth-value layer
UFOV  useful field of view
UNSCEAR  United Nations Scientific Committee on the Effects of Atomic Radiation
UPS
uninterruptable power supply
UTF Unicode Transformation Format
WHO  World Health Organization
WLS  weighted least squares
WYSIWYG what you see is what you get
XML Extensible Markup Language
SYMBOLS
Roman symbols
a  year (unit of time)
a  acceleration; area; specific activity
ã  time integrated activity coefficient
A  ampere (SI unit of current)
A  atomic mass number
Å  ångström (unit of distance: 1 Å = 10^-10 m)
Ã  cumulated activity
A  activity
b
Bq  becquerel (SI unit of activity)
c  speed of light
C  capacity; coulomb (SI unit of charge)
°C  degree Celsius (unit of temperature)
C  activity concentration; counts
cd  candela (SI unit of luminous intensity)
Ci  curie (unit of activity: 1 Ci = 3.7 × 10^10 Bq)
d  day (unit of time)
d
D  absorbed dose
F  farad (SI unit of capacitance)
F  Fano factor; force
G  gravitational constant
Gy  gray (SI unit of dose)
h  hour (unit of time)
h  Planck's constant
Hp  personal dose equivalent
HT  equivalent dose in tissue T
HT(τ)  committed equivalent dose
Hz  hertz (SI unit of frequency)
j  current density
J  joule (SI unit of energy)
K  kelvin (SI unit of thermodynamic temperature)
K  kerma
kg  kilogram (SI unit of mass)
l  length
L  litre (unit of volume)
L  luminance
m  metre (SI unit of length)
m  mass
ma  atomic mass in atomic mass units u
me  electron rest mass; positron rest mass
mn  neutron rest mass
mp  proton rest mass
M  nuclear mass
mol  mole (SI unit of amount of substance)
N  newton (SI unit of force)
N  number of counts; number of neutrons in an atom
Na  number of atoms
NA  Avogadro's number
Nel  number of electrons
Nph  number of photons
p  momentum; probability
P  power; pressure
Pa  pascal (SI unit of pressure)
r
R
R
RE
R0
s  second (SI unit of time)
s
scol  collision stopping power
srad  radiative stopping power
stot  total stopping power
s2
S
Sv  sievert (SI unit of equivalent dose)
t  time
T  tesla (SI unit of magnetic field strength)
T  temperature
T1/2  half-life
u  atomic mass unit
V  volt (SI unit of electric potential)
V  volume
w  width
wR  radiation weighting factor
wT  tissue weighting factor
W  watt (SI unit of power)
W  mean energy to produce an information carrier; weight
x1/2  half-value layer
x1/10  tenth-value layer
x
xe
xt  true mean
X  exposure
z  specific energy
Z  atomic number
Zeff  effective atomic number
Greek symbols
α  alpha particle
α  linear radiosensitivity constant
β  beta particle
β  quadratic radiosensitivity coefficient
γ  gamma ray
Γ  air kerma rate constant
ν  photon frequency
νe  electronic neutrino
ν̄e  electronic antineutrino
ρ  mass density
υ  velocity
Bailey, D.L.
Bergmann, H.
Busemann Sokole, E.
Carlsson, S.T.
Dale, R.G.
Daube-Witherspoon, M.E.
Dauer, L.T.
Demirkaya, O.
Du, Yong
El Fakhri, G.
Flux, G.
Forwood, N.J.
Frey, E.C.
Hindorf, C.
Humm, J.L.
Kesner, A.L.
Le Heron, J.C.
Lodge, M.A.
Consultants Meetings
Vienna, Austria: 1–5 September 2008, 14–16 April 2009, 17–19 May 2010, 7–11 March 2011
ORDERING LOCALLY
In the following countries, IAEA priced publications may be purchased from the sources listed below or
from major local booksellers.
Orders for unpriced publications should be made directly to the IAEA. The contact details are given at
the end of this list.
AUSTRALIA
DA Information Services
648 Whitehorse Road, Mitcham, VIC 3132, AUSTRALIA
Telephone: +61 3 9210 7777 Fax: +61 3 9210 7788
Email: books@dadirect.com.au Web site: http://www.dadirect.com.au
BELGIUM
Jean de Lannoy
Avenue du Roi 202, 1190 Brussels, BELGIUM
Telephone: +32 2 5384 308 Fax: +32 2 5380 841
Email: jean.de.lannoy@euronet.be Web site: http://www.jean-de-lannoy.be
CANADA
CZECH REPUBLIC
FINLAND
Akateeminen Kirjakauppa
PO Box 128 (Keskuskatu 1), 00101 Helsinki, FINLAND
Telephone: +358 9 121 41 Fax: +358 9 121 4450
Email: akatilaus@akateeminen.com Web site: http://www.akateeminen.com
FRANCE
Form-Edit
5 rue Janssen, PO Box 25, 75921 Paris CEDEX, FRANCE
Telephone: +33 1 42 01 49 49 Fax: +33 1 42 01 90 90
Email: fabien.boucard@formedit.fr Web site: http://www.formedit.fr
Lavoisier SAS
14 rue de Provigny, 94236 Cachan CEDEX, FRANCE
Telephone: +33 1 47 40 67 00 Fax: +33 1 47 40 67 02
Email: livres@lavoisier.fr Web site: http://www.lavoisier.fr
L'Appel du livre
99 rue de Charonne, 75011 Paris, FRANCE
Telephone: +33 1 43 07 50 80 Fax: +33 1 43 07 50 80
Email: livres@appeldulivre.fr Web site: http://www.appeldulivre.fr
GERMANY
HUNGARY
INDIA
Allied Publishers
1st Floor, Dubash House, 15, J.N. Heredi Marg, Ballard Estate, Mumbai 400001, INDIA
Telephone: +91 22 2261 7926/27 Fax: +91 22 2261 7928
Email: alliedpl@vsnl.com Web site: http://www.alliedpublishers.com
Bookwell
3/79 Nirankari, Delhi 110009, INDIA
Telephone: +91 11 2760 1283/4536
Email: bkwell@nde.vsnl.net.in Web site: http://www.bookwellindia.com
ITALY
JAPAN
NETHERLANDS
SLOVENIA
Cankarjeva Zalozba dd
Kopitarjeva 2, 1515 Ljubljana, SLOVENIA
Telephone: +386 1 432 31 44 Fax: +386 1 230 14 35
Email: import.books@cankarjeva-z.si Web site: http://www.mladinska.com/cankarjeva_zalozba
SPAIN
UNITED KINGDOM

UNITED STATES OF AMERICA
Bernan Associates
4501 Forbes Blvd., Suite 200, Lanham, MD 20706-4391, USA
Telephone: +1 800 865 3457 Fax: +1 800 865 3450
Email: orders@bernan.com Web site: http://www.bernan.com
Renouf Publishing Co. Ltd.
812 Proctor Avenue, Ogdensburg, NY 13669, USA
Telephone: +1 888 551 7470 Fax: +1 888 551 7471
Email: orders@renoufbooks.com Web site: http://www.renoufbooks.com
United Nations
Orders for both priced and unpriced publications may be addressed directly to:
IAEA Publishing Section, Marketing and Sales Unit, International Atomic Energy Agency
Vienna International Centre, PO Box 100, 1400 Vienna, Austria
Telephone: +43 1 2600 22529 or 22488 Fax: +43 1 2600 29302
Email: sales.publications@iaea.org Web site: http://www.iaea.org/books