AFIT/GCS/ENG/91D-17
THESIS
Rob W. Parrott
Captain, USAF
REPORT DOCUMENTATION PAGE
2. REPORT DATE: December 1991. 3. REPORT TYPE AND DATES COVERED: Master's Thesis
6. AUTHOR(S)
Rob W. Parrott, Capt, USAF
Page
List of Figures ......................................... vii
List of Tables .......................................... xi
Acknowledgments ......................................... xii
Abstract ................................................ xiii
II. 3D Medical Imaging .................................. 2-1
    2.1 3D Data Representation .......................... 2-2
III. Kriging Theory ..................................... 3-1
    3.1 Background ...................................... 3-1
        5.1.3 Neighborhood size differs, subdivision factor = 2, local drift assumption differs .......... 5-6
    5.2 Medical image slice interpolation ............... 5-12
    A.2.2 read header info ("Top Level" DFD, Bubble 2) .......... A-7
    A.2.3 allocate memory storage ("Top Level" DFD, Bubble 3) .......... A-8
    A.2.4 process slices ("Top Level" DFD, Bubble 4) .......... A-8
    A.2.5 march between slices ("process slices" DFD, Bubble 4.6) .......... A-10
    A.2.6 write box ("Top Level" DFD, Bubble 5) .......... A-12
    A.2.7 output geometry file ("Top Level" DFD, Bubble 6) .......... A-12
    A.3 Data Dictionary For Data Flow Diagrams .......... A-12
Appendix C. Cell Subdivision Implementation ............. C-1
Bibliography ............................................ BIB-1
Vita .................................................... VITA-1
List of Figures
Figure Page
4.1. Computational cell (cube) marching between data slices .......... 4-3
4.2. Subdivided major cell .......... 4-4
4.3. Major cell centered within surrounding cube .......... 4-5
4.4. Example of fitting points to a curved line in 2D .......... 4-7
4.5. Drift model variogram .......... 4-12
5.3. Cell subdivision, factor 2, with tricubic interpolation estimating minor-voxel values and marching cubes extraction of hyperboloid surface from mini-slices .......... 5-7
5.4. Cell subdivision, factor 2, with kriging estimating minor-voxel values and marching cubes extraction of hyperboloid surface from mini-slices. Kriging uses a neighborhood of 64 sample values and assumes no local drift .......... 5-8
5.5. Subdivision factor 3. Upper left, vanilla marching cubes. Upper right, trilinear interpolation. Lower left, tricubic interpolation. Lower right, kriging with 64 neighborhood, no drift .......... 5-10
5.6. Subdivision factor 4. Upper left, vanilla marching cubes. Upper right, trilinear interpolation. Lower left, tricubic interpolation. Lower right, kriging with 64 neighborhood, no drift .......... 5-11
5.7. Subdivision factor 5. Upper left, vanilla marching cubes. Upper right, trilinear interpolation. Lower left, tricubic interpolation. Lower right, kriging with 64 neighborhood, no drift .......... 5-12
5.8. Kriging neighborhoods, nh32, nh16x, nh16y and nh16z .......... 5-13
5.9. Kriging estimation, subdivision factor 2, neighborhood 32. Upper left, no drift. Upper right, local linear drift. Lower left, image difference of upper right from upper left. Lower right, image difference of upper right image from f2tricubic2 .......... 5-14
5.10. Kriging estimation, subdivision factor 2, neighborhood 16x. Upper left, no drift. Upper right, local linear drift. Lower left, image difference of upper right from upper left. Lower right, image difference of upper right image from f2trilinear2 .......... 5-15
5.11. Kriging estimation, subdivision factor 2, neighborhood 16y. Upper left, no drift. Upper right, local linear drift. Lower left, image difference of upper right from upper left. Lower right, image difference of upper right image from f2tricubic2 .......... 5-16
5.12. Kriging estimation, subdivision factor 2, neighborhood 16z. Upper left, no drift. Upper right, local linear drift. Lower left, image difference of upper right from upper left. Lower right, image difference of upper right image from f2trilinear2 .......... 5-17
5.13. Kriging estimation, subdivision factor 2, neighborhood 8. Upper left, no drift. Upper right, local linear drift. Lower left, image difference of upper right from upper left. Lower right, image difference of upper right image from f2trilinear2 .......... 5-18
5.14. Dog heart CT medical image slice interpolation. Create new slice between slices 41 and 42. Window titles depict type of estimation performed .......... 5-22
5.15. Baby head MRI medical image slice interpolation. Create new slice between slices 32 and 33. Window titles depict type of estimation performed .......... 5-23
5.16. Baby skin, isovalue 43, Vanilla Marching Cubes .......... 5-23
5.17. Baby skin, isovalue 43, Cell Subdivision, Trilinear Interpolation .......... 5-24
5.18. Baby skin, isovalue 43, Cell Subdivision, Tricubic Interpolation .......... 5-24
5.19. Baby skin, isovalue 43, Cell Subdivision, Kriging .......... 5-25
5.20. Baby skin, difference image between images produced by trilinear interpolation and kriging .......... 5-25
B.1. Computational cell (cube) marching between data slices .......... B-6
B.2. Example of cell mapping to a unique case .......... B-8
B.3. Correspondence of interpolation arrays to computational cells marching between two slices .......... B-11
B.4. Vanilla marching cubes example boundary cases in surface normal estimation .......... B-14
D.1. Alternate polygonization for case 6 .......... D-3
D.2. Example of alternate surface representation caused by cell subdivision .......... D-5
Acknowledgments
My first and ultimate thanks go to my Lord and Saviour, Jesus Christ. Through Him I found the strength and confidence to keep going when I thought I wasn't capable. I thank God, His Holy Spirit, and His Son for being with me and my family. With Them in our lives and Their eternal perspective in our hearts, we grew stronger in our faith during this time. Thank you Lord for making this a learning and at times an enjoyable experience for me and one that I will cherish for the rest of my life.
Special thanks go to my family. Thank you Verla for supporting me through all the times I wasn't with you, especially weekends (I know, what's a weekend?). I'm very grateful for your strength and patience. Misty, Alex, and Wesley: I'm sorry you had a part-time Dad for the last 18 months, but I'm back full-time and we're a family once again!
I would like to thank my advisor, Col Martin Stytz, who gave me invaluable support and guidance in the area of 3D medical imaging and technical writing. His previous papers and dissertation greatly aided in my understanding of 3D medical imaging. Thank you Col Stytz for your constant encouragement and positive outlook. These qualities helped keep me motivated throughout the research.
Thanks also go to Col Phil Amburn and Maj David Robinson, who both provided me with crucial technical guidance. Thank you Col Amburn for first recommending the use of marching cubes and kriging in this research. Thank you also for your help with GPR, tricubic interpolation, and numerous other areas where your keen insight pushed me in the right direction. Thank you Maj Robinson for your assistance in the area of kriging. With your deep understanding of this complex estimation technique, you helped me a great deal to see how it could be applied in 3D imaging.
Thanks go to Capt Chris Brodkin for sharing his kriging code with me. Without this, my work would have been much more difficult.
Finally, I'd like to thank the fellow students who listened to me when I needed feedback. This act of kindness was a tremendous aid when code was seemingly debug-proof or concepts just didn't seem clear. Thanks for your comradeship: Pat Rizzuto, Patty Brightbill, Don Duckett, and Wayne McGee.
AFIT/GCS/ENG/91D-17
Abstract
Estimation has not received enough attention in 3D medical imaging. Estimation is often done in 3D medical imaging to increase data resolution for enhanced renditions. It is also used for correcting inaccurate surface formations in the well-known marching cubes algorithm. Accurate estimations are vital because clinical assessment is often aided by examination of 3D medical images. This thesis introduces the geostatistical estimation technique called kriging to the field of 3D imaging. Kriging theory claims to be the optimal estimator, better than the standard deterministic methods commonly used.
This thesis explores four estimation techniques for use in 3D medical imaging. The techniques are linear interpolation, trilinear interpolation, tricubic interpolation, and kriging. The interpolation methods are standard estimation techniques used in 3D imaging. The estimation techniques are used to estimate scalar values in two primary areas. These are intracell scalar value estimation and the volume preprocessing operation of slice interpolation. This research investigates intracell scalar value estimation in a surface extraction method called cell subdivision. This research also explores slice interpolation by estimating scalar values between existing medical data slices. Slice interpolation is the operation of estimating logical slices between existing ones, typically to increase data resolution to obtain a finer mesh representation of a surface.
Tricubic interpolation is shown to be most useful in artificially created volumes of smooth functions. It is also shown to produce poor results in medical volumes and in slice interpolation. More importantly, this research demonstrates that kriging subsumes the deterministic methods investigated and can estimate much better than tricubic interpolation.
Evaluation of Scalar Value Estimation Techniques For 3D Medical Imaging
I. Introduction
1.1 Background
Three dimensional data visualization encompasses all methods that render images from volume data sets. Medical imaging and scientific visualization are two broad application areas under 3D data visualization. The need for accurate representations of the physical volume separates medical imaging and many scientific visualization areas from other 3D computer graphics techniques. The latter techniques mostly attempt "... to form realistic images from scene descriptions which may, or may not, have a physical counterpart" (41:2). Accuracy in medical images is important because many clinical applications rely on the faithful representation of the portions of the human body scanned and reproduced three-dimensionally.
... the radiologist's arena, and deals with issues concerning the statistical significance of collected data, patient dosage, medical imaging modality operation, development of new techniques/modalities for gathering data, and 2D image reconstruction.
The main data collection scanning technologies (or modalities) include Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Single Photon Emission Computerized Tomography (SPECT), and Positron Emission Tomography (PET) (41) and (39). The main purpose of each of these modalities is to sample data in patient space and produce a series of 2D images corresponding to some aspect of the patient, such as soft and bony tissue in the case of CT and MRI studies and metabolic measurements in the case of PET and SPECT studies. CT and MRI are used primarily for imaging anatomical structures, whereas SPECT and PET are used mainly for biochemical imaging. For each modality, different properties of the human body are imaged, but all produce a series of gray scale 2D images. Gray scale means values range from 0 to 255 (8 bits per value). For the purposes of this thesis, it is enough to say the values correspond to some material property of the imaged volume, such as density.
CT was the first modality, following the advent of X-ray technology, to provide high quality 2D noninvasive images. The others followed soon thereafter to provide images of equal (if not better) quality. Since this research deals with 3D data display, reference Stytz (39) for an overview of how each of these modalities is constructed and how their operation influences image quality.
These 2D images alone provide a certain level of noninvasive assistance, but with these images the clinician must mentally build 3D objects of interest. This may be very difficult, depending on the experience of the analyzer and the application area. In one area of clinical study, according to Wojcik and Harris (45:197), a four year study that assessed 3D imaging from CT scans at the Department of Diagnostic Radiology, University of Manchester concluded that "3D imaging does have a useful role to play in a number of specific clinical situations when used in conjunction with CT and other radiological imaging methods" (45:103-144).
The main challenge faced by 3D data display graphics researchers is how to create clinically useful, accurate 3D images quickly from the immense amount of information obtained from these imaging modalities. One modality can produce up to 35 megabytes of data for a single patient (41). This occurs because a typical study consists of 12 to 100 or more slices (scans) of images with resolutions up to 512 X 512. Following the development of CT in the early 1970s, many software algorithms were created to produce 3D images from CT data. The methods conceptually combine the 2D CT data into a volume of information from which significant data is portrayed. Most other 3D visualization methods (such as scientists and engineers studying computational fluid dynamics, molecular modelling, or the earth sciences (36)) have their roots in early medical imaging techniques (45:60). These methods generally fall into two main categories: volume and surface methods.
Most 3D imaging researchers use the terms volume rendering and surface rendering to classify rendering of volume (3D) data sets. Surface rendering methods extract a surface or surface boundary from the volume data set and represent it using some data structure. Volume rendering directly processes voxels (volume elements), assuming that each voxel is either opaque or partially opaque. This classification can be misleading in some cases, because it implies that surface rendering only renders surfaces and volume renderers only render volumes. Yet, surface renderers can render volumes (multisurfaces using opaque and transparent surfaces) as well as just single surfaces. Also, volume renderers can display distinct surfaces much like surface renderers by processing binary volumes (45). A binary volume is formed by processing only those voxels forming part of a surface of interest. Then only the selected voxels are rendered. It is termed binary because voxels are classified as contributing to a structure of interest (assigned a 1) or not (assigned a 0).
I prefer Farrell's terminology (19) of surface unit based and volume unit based approaches, which differentiates not by what the final image consists of, but rather by the type of unit or "primitive image element" used in the visualization process. For example, two-dimensional primitive image units, such as planar polygons or voxel faces, are used in the surface unit based approaches, whereas volume elements (voxels) are one type of three-dimensional primitive image unit used in volume unit based approaches (see chapter two for a more complete discussion of these approaches). Hereafter in this document, the terminology surface method and volume method corresponds to surface unit based and volume unit based approaches, respectively.
Surface and volume methods have their supporters in both the scientific and medical communities. The choice of method depends on the application and often on the personal preference of the researcher, scientist, engineer, radiologist, or clinician. Many researchers (e.g., (47), (43), and (27)) have noted that a variety of approaches used to render the same data set provide crucial clues to analyzing and understanding the data, whereas only one approach might leave gaps in knowledge. In the case of medical imaging, surface methods are good for displaying surfaces with definite boundaries, such as bony tissue in the human body. Likewise, volume approaches are superior to surface approaches for depicting fuzzy or amorphous volumes such as diffused tumors and blood flow in arteries and veins (33), (16) and (45). Also, surface methods allow for one of the most useful medical imaging applications: interactive manipulation of structures (45). One purpose of this task is surgery rehearsal. Surgery rehearsal is not very practical with images rendered via volume methods because they typically are too slow for interaction. Surface methods allow
real-time manipulation because they normally produce geometric primitives such as planar polygons. These geometric primitives can be used as input into fast hardware or software based hidden surface removal and shading algorithms. However, if a new surface is desired for rendering, the entire volume must be reprocessed again. Volume methods require a high computational cost for processing all the data elements in the volume; however, the increased cost may be less important than the need for a diffused or fuzzy rendition of the data.
The third medical imaging process, data analysis, involves obtaining "quantitative information about the structures in the scene" (45:57). Quantitative information includes the average density of an area within the 3D image, the size of certain anatomical structures, such as bones and blood vessels, or the volume of a subregion. Much of the desired analytic information can be acquired directly from the 2D slices. Measurements required from 3D images are volumes, 3D distances, 3D angle measurements, and "other less commonly used measurements... [such as] center of mass, moment of inertia, and surface curvature." Surface methods as well as certain volume methods (which allow the concept of a structure) provide the ability to perform all the 3D data analysis operations (45:58).
Image accuracy is critical to the data analysis process. Inaccuracies in an image can adversely affect these measurements, possibly leading to erroneous assessments by the clinician.
1.2 Problem

Scalar value estimation is very common in many 3D imaging algorithms; however, little has been done to investigate different scalar value estimation techniques. The level of inaccuracy has not been deemed serious because cost has been a larger concern. As Herman and Liu noted in 1979, "... reduction in cost is essential; computer time for the display will have to be borne by the patient." Cost is certainly still an issue (45:224). Higher order deterministic functions such as quadratic or cubic polynomials and statistically based estimation methods are more computationally expensive than simple linear interpolation. Yet, the results obtained from using them can greatly improve image accuracy and fidelity, thus improving clinical assessment and quantitative analysis. Since workstations are becoming more powerful, this issue of cost will become less of a problem; hence, a search for more accurate estimation methods is necessary.
Accuracy in medical imaging is very important to the medical community. Wojcik and Harris note, "the primary purpose of radiologists is to provide the most accurate diagnostic information possible within the capabilities of available imaging modalities" (45:196). This estimation problem may cause serious 3D image errors in two primary areas:
Udupa and Herman state, "the interpolation problem in our opinion, has received less attention than it deserves" (45:13). The interpolation problem they are referring to is the process of determining new slices between existing image slices to form a cube shaped volume. Udupa and Herman note that the primary methods of interpolating new slices are nearest neighbor, linear interpolation of voxel values, and trilinear interpolation. The nearest neighbor approach produces the worst results of the three because it does no estimation. This approach simply assigns new voxel values in the interpolated slices from the nearest original voxel. Trilinear performs somewhat better than linear, but the variation in both cases is still assumed to be linear when in fact it may not be. Udupa and Herman investigated one other method they created for use in binary volumes. This method is termed shape based interpolation. Voxels are mapped from the binary volume to a second corresponding array that holds distances from each voxel to the boundary. If the voxel in the binary volume was 1, the distance in the second corresponding array will be positive, else negative. Linear, trilinear, or some other interpolation scheme is used to derive new distances in the second array. The newly interpolated distances are then mapped to a new binary volume consisting of new slices between existing ones. Positive distances are assigned a 1 in the binary volume, negative distances a 0. Hence, the boundary of the structure of interest between 0 and 1 entries in the binary volume influences the interpolation. The authors claim it provides more accurate quantitative analysis and in their opinion leads to a better surface representation in surface methods. However, this method will not work in volume methods unless the volume method uses a binary volume.
¹A cell is a logical cube with vertices formed by four voxels in one data slice and four voxels in an adjacent slice.
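The shape based interpolation scheme described above can be sketched briefly. The following is a minimal illustration under simplifying assumptions (a brute-force nearest-opposite-voxel distance search and tiny binary slices invented for the example), not the authors' implementation:

```python
import math

def signed_distance(slice_):
    """Map a binary slice to signed distances: positive for 1-voxels,
    negative for 0-voxels, magnitude = distance to the nearest opposite voxel."""
    rows, cols = len(slice_), len(slice_[0])
    dist = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            v = slice_[r][c]
            opposite = [(rr, cc) for rr in range(rows) for cc in range(cols)
                        if slice_[rr][cc] != v]
            d = (min(math.hypot(r - rr, c - cc) for rr, cc in opposite)
                 if opposite else float("inf"))
            dist[r][c] = d if v == 1 else -d
    return dist

def shape_interp(slice_a, slice_b, t=0.5):
    """Interpolate distances between two binary slices, then threshold back to binary."""
    da, db = signed_distance(slice_a), signed_distance(slice_b)
    rows, cols = len(slice_a), len(slice_a[0])
    return [[1 if (1 - t) * da[r][c] + t * db[r][c] >= 0 else 0
             for c in range(cols)] for r in range(rows)]
```

A slice interpolated between two identical binary slices reproduces them; between differing slices, the recovered boundary falls between the two original boundaries, which is the sense in which the boundary "influences" the interpolation.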
Many 3D rendering algorithms (primarily surface methods) require cubic shaped voxels (28), (44), (22), and (45). Voxels are volume elements, the 3D analog of pixels. Cube shaped voxels are termed cuberilles. Other rendering algorithms, including volume methods such as (33:34), interpolate new slices to improve image quality. For example, assume a study consists of a series of 100 256 pixel X 256 pixel CT images. To make this volume cubic in shape, new slices are created in between existing ones until there are 256 total slices. The extra 156 slices are usually determined by linear interpolation of the sampled CT values between the original slices. The problem with this method is the data may not vary linearly.
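As a concrete sketch, linear slice interpolation between two existing slices can be written as follows; the slice values and sizes are invented for illustration:

```python
# Linear slice interpolation: each new voxel is linearly interpolated from
# the voxels directly above and below it in the two original slices.
def linear_slice_interp(slice_a, slice_b, t):
    """Interpolate a new slice a fraction t of the way from slice_a to slice_b."""
    return [[(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(slice_a, slice_b)]

# Example: one new slice midway between two 2x2 CT slices.
s41 = [[100, 120], [110, 130]]
s42 = [[140, 160], [150, 170]]
mid = linear_slice_interp(s41, s42, 0.5)   # [[120.0, 140.0], [130.0, 150.0]]
```

The weakness noted above is visible in the sketch: the estimate at t = 0.5 is always the arithmetic midpoint, regardless of how the tissue actually varies between the scans.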
Intracell scalar values are estimated to improve image fidelity and correct possibly inaccurate surface renditions generated by cell interpolation surface methods². Wilhelms and Gelder (50) have shown that the commonly used trilinear interpolation estimation method does not estimate values in artificially created volumes (versus scanner generated) as accurately as a parametric cubic function. They demonstrated that tricubic interpolation produces better images than those produced by trilinear interpolation.
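Trilinear interpolation inside a computational cell, mentioned above, can be sketched as follows; the corner values and the query point are made-up examples, not data from this research:

```python
# Trilinear interpolation: estimate a scalar inside a cell from its 8 corners.
def trilinear(corners, x, y, z):
    """Estimate a scalar at (x, y, z) in [0,1]^3.
    corners[i][j][k] is the value at corner (x=i, y=j, z=k)."""
    c = corners
    # Interpolate along x on each of the four x-edges...
    c00 = c[0][0][0] * (1 - x) + c[1][0][0] * x
    c01 = c[0][0][1] * (1 - x) + c[1][0][1] * x
    c10 = c[0][1][0] * (1 - x) + c[1][1][0] * x
    c11 = c[0][1][1] * (1 - x) + c[1][1][1] * x
    # ...then along y, then along z.
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    return c0 * (1 - z) + c1 * z

# Cell with value 0 on one x-face and 8 on the opposite x-face:
cell = [[[0, 0], [0, 0]], [[8, 8], [8, 8]]]
trilinear(cell, 0.25, 0.5, 0.5)   # 2.0
```

Note that the estimate varies linearly along each axis by construction, which is exactly the assumption the following paragraph questions.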
The problems with linear, trilinear, and tricubic interpolations are that these methods assume the variability of the data and assume a neighborhood of sample values that influence the estimation. However, variation is not necessarily linear or cubic in nature, and the number of sample values that should influence the estimation can be different than the unchangeable number determined by these methods. Assumptions about the variation of the data and the neighborhood of sample values can possibly produce erroneous results in the estimation process. Instead of assuming a variation, a geostatistical process exists that can aid in determining the variation of the data for the purpose of estimating new values. In this same process, the number of sample values influencing the estimation can be modified to fit the variability of the data. This process is termed kriging.
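The idea can be made concrete with a small ordinary kriging sketch in one dimension. The linear variogram model, the sample locations, and the values below are illustrative assumptions for this sketch only, not the models or data used later in this thesis:

```python
def gamma(h):
    """Assumed linear variogram model; in practice this is fitted to the data."""
    return abs(h)

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ordinary_krige(xs, zs, x0):
    """Estimate z(x0) from samples (xs, zs), assuming no local drift."""
    n = len(xs)
    # Kriging system: variogram matrix bordered by the unbiasedness constraint
    # (weights sum to 1, enforced via a Lagrange multiplier).
    A = [[gamma(xs[i] - xs[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [gamma(xs[i] - x0) for i in range(n)] + [1.0]
    w = solve(A, b)[:n]                  # drop the Lagrange multiplier
    return sum(wi * zi for wi, zi in zip(w, zs))

ordinary_krige([0.0, 1.0], [100.0, 140.0], 0.5)   # 120.0
```

Unlike the deterministic interpolants, both the variogram model and the size of the sample neighborhood here are free choices that can be tailored to the observed variability of the data; with a linear variogram and two symmetric samples the weights reduce to 0.5 each, reproducing the linear estimate.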
1.3 Purpose

The purpose of this research is to investigate the application of several estimation techniques to estimating scalar values within computational cells and during the volume preprocessing operation of slice interpolation.

²Cell interpolation surface methods are described in chapter two.
To accomplish the purpose of this thesis, four estimation techniques were implemented. These are linear interpolation, trilinear interpolation, tricubic interpolation, and kriging. This is the first time the geostatistical estimation technique called kriging has been applied in 3D medical imaging.
1.4 Approach

The main goal of this research is to compare the results of different techniques for estimating scalar values. Estimated values are used in two areas: within computational cells and for creating logical slices during slice interpolation. To accomplish this goal, the following tasks were accomplished:

* Artificial volume data sets were built for comparing and contrasting methods.

³The deterministic methods are linear, trilinear, and tricubic interpolation. Determinism means they do not account for any variation such as systematic error in sampling.
* A rendering system was developed by modifying an existing one (to support the software engineering goal of reusability) to display images for comparison.

* 3D images were generated by extracting surfaces from both artificial volumes and actual medical data sets by using trilinear interpolation, tricubic interpolation and kriging estimation techniques.

* 2D images were generated by estimating scalar values between two medical image slices using linear interpolation, tricubic interpolation, and kriging estimation techniques.

* Values and images derived from the different estimation techniques were compared.

1.5 Overview
II. 3D Medical Imaging
2.1 3D Data Representation
To understand how medical images are produced, one must first become famil
iar with some of the basic concepts of 3D imaging from 3D data sets.
Most algorithms developed for rendering 3D data sets expect the data to be regular shaped, or in a 3D lattice or grid format. A lattice or grid in this context is best visualized as a framework of parallel planes in space, with data information (such as material density) conceptually located at the intersection of the planes or within the volume of space between planes. In many cases transformations are applied to the discretized data to remove sampling noise, to alter the resolution of the data, or to make an irregular data set regular (45:414).
3D medical data is regular because it is derived from conceptually stacking 2D same-resolution scans, e.g., from CT or MRI output. The third dimension is then realized as the stack or slice number (see figure 2.1). Regular shaped data is popular because it can be easily mapped directly into a 3D array. Once in the array, the data is ordered for manipulation. One method of ordering is simply to view elements in the array as volume elements, or voxels. Voxels are conceptually the equal-sized parallelepipeds created by the intersections of three sets of parallel planes, each set orthogonal to the other two. The voxel and cuberille data models have been widely used in imaging algorithms. The use of these data models is discussed in the section titled 3D Medical Imaging Methods.
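The stacking-and-indexing scheme just described can be sketched in a few lines; the slice contents here are invented for the example:

```python
# Stack same-resolution 2D slices into a 3D array indexed (slice, row, column).
def build_volume(slices):
    """Return a volume where volume[s][r][c] is one voxel value."""
    rows, cols = len(slices[0]), len(slices[0][0])
    # All slices must share one resolution for the data to be regular.
    assert all(len(s) == rows and len(s[0]) == cols for s in slices)
    return [[row[:] for row in s] for s in slices]

slice0 = [[10, 20], [30, 40]]
slice1 = [[50, 60], [70, 80]]
vol = build_volume([slice0, slice1])
vol[1][0][1]   # 60: slice 1, row 0, column 1
```

The slice index supplies the third dimension, exactly as in the conceptual stack of figure 2.1.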
3D medical imaging methods are typically divided into three main approaches: contour, surface and volume. The techniques differ in the dimensionality of the geometric data model used to create the image: 1D contours (1D units) for the
[Figure 2.2: Medical imaging pipeline. Medical image data passes through scene space (scene transformation), object space (structure extraction, structure transformation), image space (geometric transformation), view space (projective transformations), and parameter space (image processing and analysis).]
contour approaches, 2D polygons (2D units) for the surface methods, and voxels and computational cells (3D units) for the volume methods (see figure 2.3). Only those medical imaging topics that most directly relate to my work are discussed in this section. Rendering (projective transformations) is a major 3D imaging task, but is not discussed at length here because this research emphasizes estimation of scalar values in one form of "structure extraction" and the scene transformation operation of slice interpolation. (45) and (41) discuss the different 3D medical imaging rendering processes.

[Figure 2.3: 3D medical imaging approaches, including the surface methods of tiling, surface tracking, and cell interpolation.]
The remainder of this section provides a very brief overview of the contour and
volume methods and discusses the surface methods in more detail. For a complete
discussion of these different approaches see (19) and (41).
2.3.2 Volume Methods These techniques are volume oriented because they
use 3D volume units as display primitives. Volume techniques differ from surface
and contour methods by the amount of data that must be stored and processed
during image computation. Additionally, volume methods typically preserve data
continuity between surface boundaries, whereas surface and contour methods do
not.
Surface rendering algorithms assume data consists of objects with thin surfaces in a volume of air (i.e., surfaces are easy to find, with little noise), whereas in reality most objects have fuzzy (thick) borders and there is much more than just thin air between surface boundaries. Volume methods preserve fuzzy borders and inter-surface material by avoiding simple classification schemes that assign binary values to voxels indicating the voxel is in or out of the image to be rendered (33). The phenomenon that a voxel may contain more than one scalar value is termed a partial volume artifact. Volume methods allow percentages of different scalar values as well as color-attenuated light and/or transparency to be assigned to a single voxel
(16). Each voxel contributes to the final image based on these percentages, thus
reducing the effect of partial volume artifacts. The final color of a pixel becomes the
contribution of all voxel values lying along a ray's line of sight or along a projection
path. Colors are weighted by transparencies and attenuation. Since every voxel
contributes to the final image in this way, volume methods capture transitional areas
between surface boundaries that might otherwise be missed by surface methods.
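The weighted per-ray contribution just described can be sketched as a simple back-to-front compositing loop; the gray-level colors and opacities below are invented sample values, not a specific renderer's implementation:

```python
# Back-to-front alpha compositing along one ray: each voxel attenuates what
# lies behind it and adds its own opacity-weighted color, so every voxel
# along the line of sight contributes to the final pixel.
def composite(samples):
    """samples: list of (color, opacity) pairs ordered back to front."""
    pixel = 0.0
    for color, alpha in samples:
        pixel = (1.0 - alpha) * pixel + alpha * color
    return pixel

ray = [(0.9, 0.2), (0.5, 0.5), (0.1, 0.8)]
composite(ray)
```

Because every sample's opacity scales both its own color and the attenuation of everything behind it, transitional (partially opaque) material between surface boundaries still shows up in the pixel instead of being discarded by a binary in/out classification.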
2.3.3 Surface Methods Surface methods attempt to reduce the volume of data to surface boundaries by depicting these boundaries with common graphics primitives such as polygons, patches, or points (2D units). Surface methods process a small number of slices at a time; hence they dominated imaging algorithms for many years, since computing power was not sufficient, until recently, to handle the entire volume of data at once. These techniques consist of tiling, surface tracking methods, and cell interpolation approaches.
Tiling

Tiling (tessellation) methods take as input contours created from any of the contour approaches. The next step usually filters the contour data by smoothing or resampling, and then polyhedra such as triangles or quadrilaterals connect adjacent contours. Smoothing is done to better approximate the curved nature of contours. One heuristic-based method (42) uses B-splines to approximate a closer fitting contour, so contour smoothing is unnecessary. The following paragraph describes some of the tiling techniques developed in the past.
Tiling methods fall into two general classes, optimal-based and heuristic-based. The optimal-based solutions (23) and (30) apply graph-theoretic methods to derive what the authors consider the optimal triangulation between two adjacent planar contours. The major disadvantage of the optimal tiling methods is the long search time required to find the best triangulation. However, the process is entirely automatic. Since speed is an issue in medical imaging, many other methods were developed based on heuristics to achieve a faster tessellation, adding interactive assistance if needed for ambiguous cases (5), (42), and (24).
One major advantage of tiling is that it produces conventional graphics geometric primitives that can be rendered by applying standard reflection and shading techniques. In addition, as with contour methods, rapid viewpoint changes are possible, and the data size in the final imaged data set can be quite small compared to the original volume of data.
Gross inaccuracies in an image can occur if contours are not well-formed with respect to each other. For example, this can happen when more than one contour is formed on a scan plane to represent a surface. The most well-known remedy is interactive editing, although it is time consuming and still error-prone. Even without interactive editing, automatic edge tracking to find the contours can be too slow for most applications. Speed of contour formation is proportional to the number of structures in the data set. Another major disadvantage is that tiling based on contours results in loss of essential information, because contours do not contain enough gradient data to represent the actual surface.
Surface Tracking
Surface tracking methods generate a surface as a set of cuberille faces. Recall from chapter one that a cuberille is a dissection of 3D space into equal size cubes by three orthogonal sets of equally spaced parallel planes. This is a natural extension of the 2D space dissection forming quadrilles, or "square-shaped pixels" (4:34). The output primitives are planar polygons formed by connected cube faces approximating a surface of interest.
The first step usually accomplished in surface tracking methods is to modify the volume to form cubes. This is done by interpolating values in the components needed (19:327). For example, Artzy et al. (1:19), Artzy (2:6), and Udupa (44:220-221) linearly interpolated in one dimension so their input data would have an interslice distance equal to the resolution of the original 2D slices. This works as long as the resolution of the original 2D slices is square. If not, interpolation in two or three dimensions might be necessary.
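The one-dimensional interpolation of new slices can be sketched as follows. This is a generic linear blend, not the cited authors' implementation; the function name and the NumPy usage are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the cited authors' code): linearly interpolate
# new slices between two adjacent 2D slices so the interslice spacing
# approaches the in-plane pixel spacing.

def interpolate_slices(slice_a, slice_b, n_new):
    """Return n_new equally spaced slices strictly between slice_a and slice_b."""
    out = []
    for k in range(1, n_new + 1):
        t = k / (n_new + 1)                    # fractional position between slices
        out.append((1.0 - t) * slice_a + t * slice_b)
    return out

a = np.zeros((2, 2))
b = np.full((2, 2), 4.0)
mid = interpolate_slices(a, b, 1)[0]           # one new slice halfway between
```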
Prior to rendering the cube faces, two tasks must be accomplished. First, the voxels must be segmented into those in the object(s) of interest (1-voxels) and those outside (0-voxels). This binary classification forms a 3D binary volume (45:49). Next, the surface boundaries located between 1-voxels and 0-voxels must be found and the display elements connected. The term surface tracking is derived from this process of locating the surface boundary from a binary volume.
Artzy et al. (1) developed a surface tracking algorithm that reduces the challenge of finding connected voxels representing the surface boundary to a graph traversal challenge. Voxels are first segmented using binary classification. Next, boundary detection is accomplished. Nodes of a directed graph, G, then correspond to voxel faces separating the object of interest from all else in the scene. The authors prove that connected subgraphs of G correspond directly to surfaces of connected components of the object. To find the surface boundary, a subgraph of the digraph is traversed.
Cell Interpolation Approaches
Cell interpolation methods generate polygonal elements by analyzing computational cells. Recall from chapter one that a computational cell is a parallelepiped such that four cell vertices are voxels in one slice and the other four are voxels in an adjacent slice (see figure 2.4). The major difference between these methods and the cuberille methods is that the cell interpolation methods analyze how the data varies between voxels to determine where the surface lies, versus assuming only constant or linear variation.
Figure 2.4. A computational cell.
There are currently four types of cell interpolation approaches. These algorithms are the marching cubes method developed by Lorensen and Cline (34), the soft objects method by Wyvill and McPheeters (52), the "gradient consistency heuristics" by Wilhelms and Gelder (50), and the cell subdivision techniques also described by Wilhelms and Gelder (50).
Each method follows a two step process. First, the volume of data is segmented by classifying each voxel as either 1 or 0 (or, in the case of the soft objects method, as hot or cold). A voxel is assigned a 1 if its scalar value is greater than an isovalue (threshold); otherwise it is assigned a 0. The second step is to determine if a cell has both 1 and 0 vertices and, if so, generate polygons within the cell to approximate the portion of the isosurface that passes through the cell. An isosurface is formed by connecting all the polygonal elements to form a 3D mesh, such that the surface intersects approximately the same (iso) scalar value or range throughout the sample data. Polygon vertices (not to be confused with cell vertices) are determined by linearly interpolating 3D coordinates between the 1- and 0-voxels of a cell. The coordinates are interpolated to the isovalue. The four methods differ by how the polygons are formed within the cells.
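The placement of a polygon vertex on a cell edge can be sketched as follows. This is an illustrative helper, not code from any of the cited methods; the function and variable names are assumptions.

```python
# Sketch of how a polygon vertex is placed on a cell edge: linearly
# interpolate the 3D coordinate to the point where the scalar value
# equals the isovalue. Names here are illustrative, not from a library.

def edge_vertex(p0, p1, v0, v1, iso):
    """p0, p1: (x, y, z) voxel coordinates; v0, v1: scalar values there."""
    t = (iso - v0) / (v1 - v0)     # fraction of the way from p0 toward p1
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Isovalue 5 between values 0 and 10 lands halfway along the edge:
v = edge_vertex((0, 0, 0), (1, 0, 0), 0.0, 10.0, 5.0)
```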
The term ambiguous cell must be defined before discussing the four cell interpolation methods. A cell is termed ambiguous if more than one topology can be chosen for it. A topology is the polygon formation within a cell. Durst (18) noticed holes can result from the marching cubes algorithm described by Lorensen and Cline. Holes can be caused by improperly forming polygons between two ambiguous cell faces.

The term ambiguous cell was defined by Wilhelms and Gelder (50). An ambiguous cell face is defined as "a cell face that contains a diagonally opposite pair of positive vertices [1-voxels] and a diagonally opposite pair of negative vertices [0-voxels]" (see figure 2.5, obtained from (50)). By looking at figures 2.6, 2.7, and 2.8, the reader can see that the ambiguous cases are 3, 6, 9, 12, 13, and 14.
Figure 2.5. An ambiguous cell face, with a diagonally opposite pair of positive (+) vertices and a diagonally opposite pair of negative (-) vertices (obtained from (50)).
Marching Cubes
The marching cubes algorithm was developed to alleviate the need of using cubic voxels, i.e., 3D data "with reduced resolution in one dimension" (7:345). Binary classification of cube vertices creates a total of 2^8 = 256 possible cell vertex classifications. By analyzing the geometry of the different cases, the total number of unique cases can be reduced to only 15 (see figures 2.6, 2.7, and 2.8). The other 241 cases are reduced to the 15 by symmetry and appropriate rotations. Wilhelms and Gelder (50) call this approach the major case table lookup method because a 256 element table must be preset to indicate the transformation of cases to the unique 15. The signs at the cell's eight vertices are then used as an index into the major case table. Once the appropriate classified cube case is determined, another preset table entry indicates the triangle formation. The triangle formation is somewhat arbitrary since intersection points (points approximating where the isosurface intersects a cell edge) can be connected in many different ways for most of the 15 cases. Only certain connections make sense with most of the cases. However, there are six cases that can cause serious inaccuracies if improperly connected (which is discussed in the next section).
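The index computation behind the major case table lookup can be sketched as follows. The bit ordering of the vertices is an assumption here, and a real implementation would pair this index with preset 256-entry case and triangle tables rather than compute geometry directly.

```python
# Illustrative sketch of the major case table lookup: the eight vertex
# classifications of a cell form an 8-bit index (2**8 = 256 possibilities)
# into a preset table. A real implementation presets all 256 entries.

def cell_index(vertex_values, iso):
    """Pack the 1/0 classification of 8 cell vertices into one byte."""
    index = 0
    for bit, value in enumerate(vertex_values):
        if value > iso:            # 1-voxel: scalar exceeds the isovalue
            index |= 1 << bit
    return index

# All vertices below the isovalue -> index 0 (empty cell, unique case 0):
empty = cell_index([1, 1, 1, 1, 1, 1, 1, 1], iso=5)
# One hot vertex in the lowest bit position -> index 1:
one_hot = cell_index([9, 1, 1, 1, 1, 1, 1, 1], iso=5)
```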
Figures 2.6, 2.7, and 2.8. The 15 unique marching cubes cell cases, case 0 through case 14.
Figure 2.9. Facial averaging for ambiguous case 6: when the front facial average is greater than the threshold, the hot vertices are connected; when it is less than the threshold, the hot vertices are not connected.
Soft Objects
Wyvill and McPheeters' soft objects method is almost identical to the marching cubes algorithm. The actual 3D scalar field is fabricated rather than obtained from scanners, but the data representation is still a 3D regular data set. Key points are specified to reduce the data set. Additional points are estimated as needed by a cubic function that uses a radius of influence to determine the key points needed in the estimation function. The most significant difference between this method and marching cubes is that this method uses a dynamic, simple technique to polygonize an ambiguous cell.

The soft objects method forms polygon vertices on an ambiguous cell face by analyzing the four cell face vertices. The method assumes the value at the center of the face is approximated by averaging the four vertex values. Then, if the averaged value is greater than the isovalue, the positive vertices are connected. For example, see figure 2.9. In this way, two cells sharing ambiguous faces are always consistently polygonized, so no holes result.
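The facial averaging rule can be sketched as follows. The function and return names are illustrative assumptions; the decision rule itself is the one described above.

```python
# Sketch of the soft objects facial-averaging rule for an ambiguous face:
# average the four face corner values; if the average exceeds the isovalue,
# connect the positive (hot) corners, otherwise do not.

def resolve_ambiguous_face(corner_values, iso):
    center_estimate = sum(corner_values) / 4.0
    return "connect_positive" if center_estimate > iso else "connect_negative"

# Diagonal pair at 9, diagonal pair at 2, isovalue 5: the average is 5.5,
# so the positive vertices are joined across the face.
choice = resolve_ambiguous_face([9, 2, 9, 2], iso=5)
```

Because two cells sharing a face see the same four corner values, they compute the same average and make the same choice, which is why no holes result.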
This facial averaging method provides only a rough estimate of center face values. A different form of estimation may be required to obtain a more accurate center value estimate. Even if the estimate is not very accurate, Wilhelms and Gelder claim the facial averaging method guarantees continuity, i.e., no holes appear in the image. This assertion is based on their facial plane principle. They refer to the (possibly nonplanar) polygon generated within a cell as a topological polygon because it specifies "the topology of the isosurface within the cell." This assertion makes intuitive sense because the same facial average is calculated for shared cell faces. This results in shared vertices between cells for that face.
Gradient Consistency Heuristics

Two assumptions are made in this method. First, the center-pointing gradients are assumed to approximate the derivatives at each corner. Second, the quadratic functions are assumed to exactly fit the two endpoints. The gradient assumption is typically used for determining the normals for shading as well. The second assumption is made in many estimation methods to insure the estimation process returns the correct value at known points.

This method only analyzes the face corner sample values, similar to the soft objects method. It should choose the topology better than the soft objects method only if the underlying scalar field function is quadratic along both face diagonals.

The quadratic fit method is similar to the center-pointing gradient method, except here the center face value is estimated by a single bivariate quadratic function. The least squares error fit uses all four face corner sample values to estimate parameters of the function. This method chooses a more correct topology than the center-pointing gradient if the underlying scalar field function is quadratic across the entire face.
The gradient consistency heuristics make minor improvements to the soft objects method of resolving ambiguous cells. A disadvantage of the heuristics methods is that they assume the scalar field function is locally quadratic. If wrong, this assumption can generate inaccurate topologies within cells. The only way to know if the topology is inaccurate is to know the local variation of the scalar value function. However, obtaining this knowledge might be too computationally costly to be of any benefit.
Cell Subdivision Techniques
Cell subdivision techniques are implemented for two primary reasons. The first is to increase image fidelity. The second is to resolve ambiguity of cells. Image fidelity can be increased because the data resolution is increased. Cell subdivision is the same method employed by Cline and Lorensen's (8) dividing cubes algorithm. In dividing cubes, cubes (cells) are divided until the resolution of point primitives is reached. In cell subdivision, instead of subdividing cells to create point primitives, cells are subdivided to create subcells. These subcells are then treated as the original cells were, i.e., the isosurface is represented within them by triangles. These subcells are much smaller than the original ones, hence data resolution has increased. Because of this increased data resolution, the surface can be approximated closer to the actual surface by the smaller triangles. However, the quality of the final image depends on how the scalar values are estimated at the subcell vertices. Therefore, the significant challenge with any subdivision technique is how to estimate values at the subcell vertices.

According to Wilhelms and Gelder, another way to resolve ambiguity in cell cases is to create subcells within original cells and polygonize the subcells as in the soft objects method. New cell vertex scalar values are estimated by a resampling function. In Wilhelms and Gelder's implementation, they subdivide each computational cell and apply an estimation function to determine new scalar values at the points derived from the subdivision. The two estimation functions they use are trilinear and tricubic. The major case table lookup method (marching cubes) is used to process the subcells, and ambiguous cases are handled as in the facial averaging technique. Tricubic is more computationally intensive than trilinear, but it produces more accurate images for their artificially created volumes than any of the other cell interpolation methods. Image quality improves because the tricubic method considers sample data in a large neighborhood (the surrounding 64 voxel values) and does not assume linear variation. On the other hand, trilinear only analyzes the eight cell vertex values and assumes a linear variation along each of the three axes.
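Trilinear resampling at a subcell vertex can be sketched as follows. This is an illustrative implementation of the standard trilinear formula, not Wilhelms and Gelder's code; the corner indexing convention is an assumption.

```python
# Sketch of trilinear resampling at a subcell vertex: the eight cell corner
# values c[i][j][k] (i, j, k in {0, 1}) are blended by the fractional
# position (x, y, z) within the cell. Names are illustrative.

def trilinear(c, x, y, z):
    result = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                weight = ((x if i else 1 - x) *
                          (y if j else 1 - y) *
                          (z if k else 1 - z))
                result += weight * c[i][j][k]
    return result

# Cell with value 0 on one face and 8 on the opposite face:
corners = [[[0, 0], [0, 0]], [[8, 8], [8, 8]]]   # c[i][j][k], i along x
center = trilinear(corners, 0.5, 0.5, 0.5)        # value at the cell center
```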
Note that all nonempty¹ cells must be subdivided to ensure continuity between faces. If not, one large undivided nonempty cell face may share a face with several subcell faces.
Besides just resolving ambiguous cases, cell subdivision is also a good method to use if a smoother looking image is desired. However, depending on the subdivision size, the resulting number of polygons may be very large. A typical set of 50 to 100 brain MRI or CT slices can result in over 500,000 polygons for certain isovalues. Subdividing by a factor of just two could increase this to over two million polygons. In this case it might be more beneficial to use Cline and Lorensen's dividing cubes algorithm (8).
¹Nonempty means that the classification of the vertices does not derive unique cell case 0 in figure 2.6. That is, a nonempty cell is one such that the surface is determined to pass through it.
Surface Methods Conclusion
Both the cuberille-based methods and the cell interpolation methods produce images that appear to represent the surface of interest. However, the most important
issue is not appearance but rather the accuracy of the methods. An argument could
be made that the preprocessing step of creating cuberilles produces more accurate
results because of the increased resolution. However, new slices are normally created
by simple linear interpolation, whereas a higher order interpolation might be more
accurate. Also, once the cuberilles are formed, constant variation within cells is
assumed during surface formation.
include voxels and cuberilles. Next, the common 3D medical imaging transformations were summarized. The remainder of the chapter described the 3D medical imaging methods: contour, volume and surface. Surface methods were covered in great detail because surface methods were implemented to accomplish the purpose of this thesis. The first primary surface method discussed is surface tracking. Since these techniques use the cuberille data model, surface tracking methods usually estimate new slices between existing slices to form cuberilles. Although a surface tracking method was not implemented in this research, the task of slice interpolation was accomplished. Finally, cell interpolation surface methods were discussed. These include the marching cubes algorithm, the soft objects method, the gradient consistency heuristics, and the cell subdivision techniques.

The focus of the next chapter is to explain how an estimation function can be derived based on the statistics of the underlying scalar field data.
III. Kriging Theory
3.1 Background
Krige developed the basic theory, but a French engineer named Georges Matheron and his colleagues developed the rigorous mathematical theory of kriging (35:602) and (9:625). Prior to kriging, estimation methods used in geostatistics made several simplifying and usually invalid assumptions. The most erroneous assumption was that variances between data samples are constant. This assumption made the other estimation methods very error-prone. Krige pointed out that to get a more accurate estimate, variances between the prospective blocks and the core samples must be taken into account. Kriging is primarily used in situations where there is expected to be some dependence between data measurements at different locations. Its use in discovering deposits in various mining pursuits is well known and documented; see (13:70-71).
Kriging is a process that derives a geostatistic. Geostatistics differ from classical statistics in the variables used. Recall that a statistic is a function of random variables. A geostatistic is a function of regionalized variables. Random variables model chaotic processes. Regionalized variables model spatially dependent natural phenomena. According to Matheron (35), regionalized variables are characterized by three qualities. The first is localization. Regionalized variables are localized within a support. A support is the volume of a sample, consisting of geometric size, shape, and orientation. In the geosciences, an example support is a drill core. The second quality of regionalized variables is that they may exhibit continuity within the region of a support. Statistical continuity means the sample values do not deviate significantly from each other. Thus, they are not random in nature, but show some kind of order. The third characteristic of regionalized variables is anisotropies, discussed later in this chapter.
Kriging is a modified form of a multiple linear regression model with parameters estimated by a technique similar to the method of least squares (37) and (15). Kriging uses weighting functions based on distance to compute the desired data value. The method operates on the assumption that data points closer to the target should be weighted heavier in the estimation calculation than those further from the new point. For example, in figure 3.1 points 1, 2, and 7 would be expected to have more influence on the estimate than points 5 and 6. This weighting strategy has
Figure 3.1. A neighborhood of numbered sample points surrounding the point to be estimated.
proven to be very accurate in the field of geostatistics. Davis sums up the goal of kriging in the following sentences:
There are an infinity of other possible combinations of weights that could be chosen, each of which will give a different estimate and a different estimation error. There is, however, only one combination that will give a minimum estimation error. It is this unique combination of weights that kriging attempts to find.
These conditions insure optimality and insure that "no other linear combination of the observations can yield estimates that have a smaller scatter around their true values" (14:385).
Why use a linear estimator versus a nonlinear one? According to Delfiner and Delhomme, "The major difficulty with nonlinear estimators is that they involve parameters or characteristics that cannot be inferred from the data" (15). Even if the sampling function is locally nonlinear, this is taken into account by the drift in universal kriging, which is discussed later.
The basic purpose of kriging for this research is the same as that for the estimation functions explored by Wilhelms and Gelder (50): given a new point within a neighborhood of known sample points and associated values, estimate a value at the given point using a combination of known sample point values. The kriging equation estimates a value as a distance-weighted linear sum of known sample points. The term linear refers to just the linear sum; it does not indicate that the data varies linearly. Kriging can model all forms of higher order data trends such as quadratic and cubic. This capability is discussed in later sections. The next few sections explore how kriging calculates data weights and show that the conditions assumed about the data make kriging the optimal interpolator for data sets derived from natural phenomena.
3.2 Kriging and Least Squares

Kriging primarily differs from least squares in the type of variables used in the linear equation. The variables used in least squares are assumed to be independent random variables, whereas those in kriging are regionalized variables. Recall from a previous definition that regionalized variables model natural phenomena. These types of variables assume the data is localized and exhibits continuity, whereas phenomena modeled by random variables exhibit chaotic behavior. No simple methods like least squares can be applied to such variables because no tractable deterministic functions can be found that describe the complex variations in regionalized variables (14). The next section discusses how the dependence of regionalized variables is captured in the kriging linear regression model in the simplest kriging case.
My discussion of kriging begins with ordinary point kriging because the other forms are modifications of this method (ordinary and point are defined in the next section). The discussion begins by stating the goal of kriging, followed by an explanation of the constraints on the kriging equation that produce a system of equations. The system of equations is solved to yield the weights in the kriging equation. The equations presented below were obtained from a number of different sources, primarily (12), (14), (17), (9), (15), (13), (48), (11), (31), and (6).
In the following equation the goal is to estimate Ẑ, the unknown value at the known position p, in the neighborhood of known points p_i with known values Z(p_i):

    Ẑ = Σ_{i=1}^{n} w_i Z(p_i)    [1]

The Z_i's are the regionalized variables, with the parameter being an n-dimensional point, and the w_i's are the weights. The weights are chosen to satisfy the following two conditions that make Ẑ the B.L.U.E. (Best Linear Unbiased Estimator):

    E(Ẑ − Z) = 0    [2]
    E(Ẑ − Z)²  minimum    [3]

where Ẑ is the value being estimated at p and Z is the actual value at point p. The estimation error, Ẑ − Z, is a measure of the dissimilarity between the two variables Ẑ and Z. E(Ẑ − Z)² is the mean square error and E() is the expected value or mean.
Using the following equality from the definition of the variance, V (37:89),

    E(Ẑ − Z)² = V(Ẑ − Z) + [E(Ẑ − Z)]²

and noting that condition [2] forces the second term to zero, [3] can be rewritten as

    σ_e² = V(Ẑ − Z)  minimum    [4]

where σ_e² is the estimation or error variance. This is important because it points out that even though condition [3] states minimum mean square error, it is equivalent to minimum estimation error variance.
Next, the system of kriging equations is derived from conditions [2] and [4]. This system is similar to the set of simultaneous linear equations (normal equations) produced in linear regression, which are formed by setting the partial derivatives with respect to the unknown parameters to zero (21). Before the equations can be derived, the above two conditions are expanded and changed into more quantifiable constraints.
Modifying condition [2] above is straightforward:

    E(Ẑ − Z) = 0
    E(Ẑ) − E(Z) = 0

Then, recalling [1], substitute into the above to get as an additional constraint

    E(Σ_{i=1}^{n} w_i Z(p_i)) − E(Z) = 0
    Σ_{i=1}^{n} w_i m(p_i) − m(p) = 0

where m() is the mean or first moment. Since the mean is assumed constant, m(p_i) = m(p), and the constraint becomes

    Σ_{i=1}^{n} w_i = 1    [5]

Unbiasedness in the estimate is assured by insuring that the weights sum to 1 (13:238).
The estimation error variance is ((6), (13), and (9))

    σ_e² = K(p,p) − 2 Σ_{i=1}^{n} w_i K(p_i, p) + Σ_{i=1}^{n} Σ_{j=1}^{n} w_i w_j K(p_i, p_j)    [6]

where K(m,n) is the covariance between points m and n, and p is the n-dimensional point where the estimate is computed. Journel derives similar estimation error variances (29), but this equation is considered the general unbiased linear estimator, derivable by expanding the variance of the linear combination of regionalized variables in equation [4].
The covariance between the variables is modelled by a function called the semivariogram. In most cases the semivariogram is unknown and must be determined by a process called structural analysis. For the present, assume the semivariogram is known and is represented as γ(m,n). The semivariogram represents the average difference squared between the values at points m and n. The main parameter used in semivariograms is the distance between points m and n. The semivariogram models the dependency of data values based on how far apart they are from each other. The spatial distribution of regionalized variables is accounted for in the semivariogram. The semivariogram gives the correlation between sample values a geometric meaning rather than a probabilistic meaning. Structural analysis and the semivariogram are discussed in more detail in a later section. As a result of substituting the semivariogram in place of the covariance, [6] becomes
    σ_e² = 2 Σ_{i=1}^{n} w_i γ(p_i, p) − Σ_{i=1}^{n} Σ_{j=1}^{n} w_i w_j γ(p_i, p_j)    [7]
Now that condition [4] is more quantifiable in terms of [7], it must be minimized to satisfy the minimum estimation variance constraint. Because of the constraint [5], minimizing this system would produce n + 1 equations in only n unknowns. Therefore a Lagrangian multiplier, η, is added to equalize the system. Minimization is done by taking the partial derivatives with respect to the weights and the Lagrangian multiplier η and setting the resulting equations equal to zero. This yields the following complete ordinary point kriging system:

    Σ_{j=1}^{n} w_j γ(p_i, p_j) + η = γ(p_i, p)    (i = 1, …, n)    [8]
    Σ_{j=1}^{n} w_j = 1
The Methodology chapter will show these equations in expanded matrix form.
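As a preview of that matrix form, system [8] can be sketched as follows. The linear semivariogram γ(h) = h used here is purely an assumption for illustration; in practice the model comes from structural analysis, and the function names are not from any particular library.

```python
import numpy as np

# Minimal sketch of the ordinary point kriging system [8] in matrix form.
# A linear semivariogram gamma(h) = h is assumed only for illustration.

def gamma(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def ordinary_krige(points, values, target):
    n = len(points)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for i in range(n):
        for j in range(n):
            A[i, j] = gamma(points[i], points[j])
        A[i, n] = 1.0            # Lagrange multiplier column
        A[n, i] = 1.0            # unbiasedness row: weights sum to 1
        b[i] = gamma(points[i], target)
    b[n] = 1.0
    sol = np.linalg.solve(A, b)
    w = sol[:n]                  # kriging weights (sol[n] is the multiplier)
    return float(w @ np.asarray(values)), w

# Estimating midway between two samples; symmetry gives equal weights:
est, w = ordinary_krige([(0.0,), (2.0,)], [10.0, 20.0], (1.0,))
```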
The system presented above is ordinary point kriging. Due to the constraints [2] and [4], this system will determine optimal weights to substitute back into equation [1]. Ẑ is the B.L.U.E. because of the optimal weights, i.e., no better linear estimate can be derived. This system is only one of several types of kriging possible. Other types and their differences from this system are discussed next.
The kriging literature describes two broad categories of kriging and three types of kriging. The categories differ based on estimation region. The types differ based on assumptions about the sample data. Either of the two categories can be used in any kriging type.
3.4.1 Kriging Categories The two categories of kriging are point and block. Point kriging was discussed in the preceding section. It is employed when the goal is to estimate a value at one particular point.
Block kriging estimates a value for a region instead of at a single point. There are two block kriging methods. The first uses point kriging repetitively to estimate several values within the block and then averages the results to get one value. The second method derives a new set of equations using a modified covariance function of the K() terms in equation [6]. The second method involves computing a double integral that evaluates the area of the block in question. The problem with the second approach is finding an explicit analytic form for the integral; hence, the first method is most often used for block kriging.¹
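The first block kriging method can be sketched as follows. Note that the point estimator here is a simple inverse-distance stand-in, an assumption made to keep the sketch self-contained; it is not a full kriging solver, and all names are illustrative.

```python
# Sketch of the first block kriging method: point-krige several locations
# inside the block and average the results. 'point_krige' is a stand-in
# (inverse-distance weighting), not a real kriging system.

def point_krige(points, values, target):
    # placeholder estimator used only to make the sketch runnable
    weights = [1.0 / (abs(p - target) + 1e-9) for p in points]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

def block_estimate(points, values, block_locations):
    estimates = [point_krige(points, values, t) for t in block_locations]
    return sum(estimates) / len(estimates)

# Average of point estimates at three locations spanning the block:
z_block = block_estimate([0.0, 10.0], [0.0, 100.0], [4.0, 5.0, 6.0])
```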
3.4.2 Kriging Types There are three primary types of kriging discussed in the literature: simple, ordinary and universal. These techniques differ in their assumptions about the behavior of the expected values or means of the regionalized variables E(Z_i) (15); see figure 3.2. Each of the three methods can be developed so the final kriging system estimates points or blocks.
¹Some authors combine point and block kriging into one form in which the area integral reduces to a point in the case of point kriging (9:626).
Kriging      Means of Regionalized      Means of Regionalized
Type         Variables are Known        Variables are Constant
Simple       Yes                        Yes
Ordinary     No                         Yes
Universal    No                         No

Figure 3.2. Table of Kriging Types (all three can estimate points or blocks)
Simple Kriging

This type of kriging, as the name suggests, is the simplest form of kriging, even simpler than the above ordinary kriging system. However, this method is seldom employed. The method is termed "simple" because sample means at known locations are assumed to be known prior to kriging. The means are stated as

    E(Z_i) = m_i,  i = 1, …, n    and    E(Ẑ) = m

where m_i is the mean of the i'th regionalized variable and m is the mean of the estimator regionalized variable.
This assumption modifies the kriging equations derived above because the unbias constraint changes. The ordinary equations derived above left out one variable important to the simple kriging equations. This variable is called a shift parameter, A (29). In ordinary kriging, A = 0. The shift parameter modifies the estimation as

    Ẑ = A + Σ_{i=1}^{n} w_i Z(p_i)
This assumption also simplifies the minimum error variance condition [4]. In essence, this form of kriging reduces to classic linear regression (29) and (12). Simple kriging is seldom used because the means of the regionalized variables are usually unknown.
Ordinary Kriging
Unlike simple kriging, ordinary kriging assumes the mean of each regionalized variable is unknown. Also unlike simple kriging, ordinary kriging assumes each mean is the same. A constant mean is more commonly referred to as a stationary mean (9:626). The ordinary system of equations was derived above to estimate points.
Universal Kriging or Kriging with a Trend
When using universal kriging, the first process becomes estimation of the regionalized variable means at the sample points, using a local neighborhood of known sample values to determine if the means are constant from sample point to sample point. If the data has global (regional) drift, there appears to be a definite pattern or drift in the sample values over a larger area, usually much larger than the neighborhood size used in the kriging system. If drift is local, it occurs within the neighborhood. Figure 3.3, adapted from (14) and (13), shows regional drift as a line placed through the data points.
If global drift exists, the regionalized variables are now viewed as being composed of two parts: the drift and the residual. The drift of a regionalized variable is its expected value (mean) at a point p_i within a certain neighborhood. This experimental or computed² mean is called drift if it varies from point to point. The residual is calculated by subtracting the drift from the actual measurement. For example, assume an experimental drift is calculated for every known point, call it m(Z(p_i)). Then the residuals are calculated as

    R(p_i) = Z(p_i) − m(Z(p_i))
²David shows that the experimental drift can be calculated by estimating the drift coefficients as a linear combination of the available data. This is another multiple linear regression that typically assumes simple unbiased estimation derived from a least-squares method (13). The drift equation is shown in the next paragraph.
Figure 3.3. Regional drift shown as a line placed through the data points Z(x) (adapted from (14) and (13)).
There are three primary steps in estimating points or blocks in the presence of a
global drift. First, the drifts are estimated at each point and residuals are computed.
Then, the residuals are used as stationary regionalized variables in a simple, ordinary
or universal kriging system (depending on the local means). Lastly, the estimates
derived by kriging the residuals are added back to the drifts to get the final estimate.
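The three steps can be sketched as follows. The linear drift basis and the residual estimator (a simple mean, standing in for a kriging system on the residuals) are assumptions made to keep the sketch self-contained; all names are illustrative.

```python
import numpy as np

# Sketch of the three steps for estimating under a global drift:
# (1) fit a drift surface and compute residuals, (2) estimate the residual
# at the target (a trivial stationary estimator stands in for kriging),
# (3) add the drift back at the target point.

def estimate_with_drift(xs, zs, target):
    xs = np.asarray(xs, dtype=float)
    zs = np.asarray(zs, dtype=float)
    # Step 1: first-order (linear) drift fit by least squares.
    basis = np.column_stack([np.ones_like(xs), xs])
    coeffs, *_ = np.linalg.lstsq(basis, zs, rcond=None)
    residuals = zs - basis @ coeffs
    # Step 2: stand-in for kriging the (now stationary) residuals.
    residual_estimate = residuals.mean()
    # Step 3: add the residual estimate back to the drift at the target.
    drift_at_target = coeffs[0] + coeffs[1] * target
    return float(drift_at_target + residual_estimate)

# Samples lying exactly on the line z = 2x have zero residuals:
z_hat = estimate_with_drift([0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 4.0, 6.0], 4.0)
```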
If local drift exists, then universal kriging is used (13:267). The ordinary point kriging equations developed in the last section must be modified in the presence of local drifts to yield the system of universal kriging equations. These modifications are described below.
The kriging system of equations changes in the presence of drift because the unbias constraint changes. In ordinary kriging the unbias constraint forces the weights to sum to 1 because the mean m(p) is constant, producing equation [5]. Now that m(p) is no longer constant but takes into account drift from equation [9], the unbias constraint results in

E(Ẑ - Z) = 0

E( Σ_{i=1}^{n} w_i Z(p_i) - Z(p) ) = 0

which, after substituting the drift expression of equation [9], reduces to

Σ_{i=1}^{n} w_i f_l(p_i) = f_l(p),   l = 0, 1, ..., k   [10]
Notice the drift coefficients d_l have dropped out of the constraint. Thus the universal system is independent of the drift coefficients, but still insures unbiasedness. Since this condition insures unbiasedness regardless of the unknown drift coefficients d_l, the term universal is used to denote the system of equations that results.
This constraint (equation [10]) adds k + 1 more equations to the minimum variance condition, thus k + 1 additional Lagrange multipliers (μ_l) are needed (9), (15), and (17). After the partial derivatives of the equations are taken with respect to the n weights and the k + 1 Lagrange multipliers and set to zero, the resulting universal kriging system is
Σ_{j=1}^{n} w_j γ(p_i, p_j) + Σ_{l=0}^{k} μ_l f_l(p_i) = γ(p_i, p)   (i = 1, ..., n)

Σ_{j=1}^{n} w_j f_l(p_j) = f_l(p)   (l = 0, 1, ..., k)   [11]
There are several unknowns in this system of equations that must be estimated. The unknowns in the universal system are the order k of the polynomial f_l(p), the drift coefficients d_l, and the size of the neighborhood used to determine the drift. These unknowns are determined during structural analysis. The drift coefficients d_l can be found along with the weights in the kriging system (14:394) or by a least-squares method (13:272). The order k is usually 1 or 2. If the means of the regionalized variables are the same, k = 0 and equation [10] reduces to equation [5], which is ordinary kriging. If the order is 1, the 0th order term (the constant) is included as well as the first order terms; in two dimensions these are the x and y terms. If k is 2, the 0th, 1st, and 2nd order terms are included in the drift. The first order (k = 1) polynomial associated with linear drift in the neighborhood is

m(p_i) = d_0 + d_1 X_1i + d_2 X_2i
and the second order polynomial associated with quadratic drift is

m(p_i) = d_0 + d_1 X_1i + d_2 X_2i + d_3 X_1i² + d_4 X_2i² + d_5 X_1i X_2i
where X_1i and X_2i are the first and second coordinates of the i'th known 2D point in the neighborhood (14:394). As stated earlier in this chapter, any order drift can be modelled by kriging; simply modify k in equation [12]. If a polynomial drift is not observed during structural analysis, other types of drifts can be easily modelled as any type of function of the geometric coordinates.
³Davis (14) and David (13) present the universal kriging equations in expanded matrix form.
The semivariogram, drift, and neighborhood all influence each other and characterize the notion of localized continuity within a sample volume. The goal of the process is to find a model semivariogram that models the spatial correlation of sample values within a local zone of influence (neighborhood).

Structural analysis is usually performed prior to kriging. Before the process of structural analysis can be understood, the semivariogram must be defined.
Semivariogram Definition

The semivariogram is a graph and/or formula (14) that describes the expected squared difference between sample values as a function of the distance h separating them: γ(h) = ½ E[(Z(x) - Z(x + h))²].
The semivariogram can be represented by a formula or a graph. Graphs depict the distance h on the abscissa and the semivariogram γ(h) on the ordinate. The experimental or sample semivariogram (graph) is computed and plotted from the known sample points and values and is compared against known model semivariograms to determine the best fit (closest match). After a fit is made, model parameters are estimated. The semivariogram used in equations [8] and [11] is a model function, not experimental.
Davis states that the following equation can be used for estimating the experimental semivariogram for multiples of h when the spacing h is the same between data points (in other words, the data is regular):

γ*(h) = Σ_{i=1}^{n-h} [ (X_i - X_{i+h}) - m_h ]² / ( 2(n - h) )   [13]

The asterisk indicates this semivariogram is experimental, or estimated from the sample values. The expression takes into account drift in the inner second term in the numerator,

m_h = Σ_{i=1}^{n-h} (X_i - X_{i+h}) / (n - h)   [14]

the average difference between sample values separated by distance h. Equations [13] and [14] are in (14).
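For regularly spaced one-dimensional data, equations [13] and [14] can be coded directly. This is a sketch; the function names, and the exact drift-corrected form shown, are assumptions of the sketch:

```cpp
#include <vector>
#include <cstddef>
#include <cmath>
#include <cassert>

// Equation [14]: the average difference (drift term) between samples lag h apart.
double meanDifference(const std::vector<double>& z, std::size_t h) {
    double sum = 0.0;
    for (std::size_t i = 0; i + h < z.size(); ++i) sum += z[i] - z[i + h];
    return sum / double(z.size() - h);
}

// Equation [13]: the experimental semivariogram at lag h, with the mean
// difference subtracted inside the square to account for drift.
double experimentalSemivariogram(const std::vector<double>& z, std::size_t h) {
    double m = meanDifference(z, h);
    double sum = 0.0;
    for (std::size_t i = 0; i + h < z.size(); ++i) {
        double d = (z[i] - z[i + h]) - m;
        sum += d * d;
    }
    return sum / (2.0 * double(z.size() - h));
}
```

Note that for data consisting of pure linear drift the corrected estimate is zero at every lag, which is exactly the statistical repair the correction term provides.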
Process of Structural Analysis
The goal of structural analysis is to determine a model semivariogram. To find
this model semivariogram, an experimental semivariogram is first estimated from
the known data values and compared to known model semivariograms to find a close
match. However, before this is done, it must be determined if drift exists. The drift expression and the experimental semivariogram change based on the size of the neighborhood⁴. It would be best to estimate the semivariogram using the entire data set. However, it is too costly and often does little good, since a distance is usually reached at which the effect of values on one another becomes negligible. Therefore, a maximum distance for the neighborhood is assumed initially to determine drift experimentally. This same distance is used in calculating the experimental semivariogram.
In the case of ordinary kriging, no drift exists, so the experimental semivariogram calculated from the original sample values is sufficient to determine the spatial correlation of the samples. If drift exists, the situation is more complex because the semivariogram is not reliable statistically.
The model semivariogram must provide good statistical properties, like correlation between sample points based on spatial relationships. However, the semivariogram estimated from nonstationary regionalized variables may not have these kinds of properties. According to Davis, stationary variables (regionalized variables with stationary means) force equation [14] to zero, which gives equation [13] a known statistical property: "the difference between the variance and the spatial autocovariance for the same distance." Normalizing the variables, i.e., mean zero and variance 1, provides an even better statistical property: "the semivariogram becomes the mirror image of the autocorrelation function" (14:212).
The main problem in the presence of global drift is that the experimental semivariogram is not reliable statistically. Recall that drift can exist in two forms, local and global. Local drift is accounted for in universal kriging. Therefore, the main task of structural analysis in this case is to compute a reliable experimental semivariogram in the case of global drift. Since stationary regionalized variables are considered reliable and residuals are stationary, the residuals can be used to
⁴The drift coefficients as well as the order of the drift polynomial may differ.
compute the experimental semivariogram. To find the residuals, which are the drifts subtracted from the actual values, equation [9] must be solved.
The coefficients in equation [9] are determined in two possible ways. First, they can be estimated along with the Lagrange multipliers in the universal kriging system, or they can be separately estimated by a regression technique like least-squares (13). If a kriging system is used, a known semivariogram can be assumed at the start (a "first approximation") to obtain estimates of the drift. If a least-squares approach is used, a first- or second-order polynomial is fit to the sample data to obtain estimates of the drift coefficients. In this case the d_l's in equation [9] are estimated as linear combinations of the data

d_l = Σ_j b_lj Z_j
After estimates of the drift are obtained, they are removed from the actual data to obtain residuals. The residuals are then used to estimate an experimental semivariogram. The experimental semivariogram is then compared to known models. If a poor fit results, the "first approximation" semivariogram, the neighborhood size, and/or even the order k of the polynomial drift equation [12] can be modified to obtain a closer fit. This recursive process is known as structural analysis. There is "a strong interrelation between neighborhood size, drift, and semivariogram for the residuals" (1).

Once a model fit is obtained, a kriging system is used with the model semivariogram to obtain estimates from the residuals. To get the final kriged estimates, the drift estimates are added to the estimates determined by kriging the residual regionalized variables.
In summary, the first step in structural analysis is to determine if global drift exists in the data set. This requires calculating a drift or sample mean for every sample value in the data set. If it exists, global drift must be removed by calculating residuals and then computing the experimental semivariogram from these residuals. Once a model semivariogram is chosen, values are estimated by kriging the residual data set with this model semivariogram. These kriged values are then added back to the drifts calculated at the beginning of the analysis. Or, if global drift does not exist, the data itself is used to determine the experimental semivariogram. Whether or not global drift exists, local drift can be assumed, because if local drift does not exist, the local drift coefficients will simply be zero. Global drift is not so easily accounted for because it involves a much larger neighborhood of sample values.
Models
Model semivariograms are used in the kriging system of equations rather than experimental semivariograms. Experimental semivariograms are not used because they do not provide results for distances other than those derived from the sample data. The term model refers to a known, continuous semivariogram (31) and (14:246). An experimental semivariogram like equation [13] is not used in equations [8] and [11] because the experimental semivariogram is computed for known discrete distances. Instead, a known model semivariogram computed for continuous distances is used. Since kriging estimates new values at possibly different distances than those between the original sample points, a continuous function is the most reasonable choice (14:246). Some of the better known continuous semivariogram models include the linear, spherical, and exponential (see figure 3.4).
Figure 3.4. Example semivariograms for linear, spherical, and exponential models
Linear Model. The model equation is

γ(h) = σ² (h / a)

where a is the range. Davis suggests this approximation is good for "distances much less than the range" (14:247).
Spherical Model. The model equation is

γ(h) = σ² ( 3h/2a - h³/2a³ ) + C₀   if h ≤ a
γ(h) = σ² + C₀                      if h > a
The nugget effect measures micro-scale variations. It is the position on the γ(h) axis where the semivariogram intersects, possibly causing a discontinuity at the origin if C₀ ≠ 0 (35).
Exponential Model. The model equation is

γ(h) = σ² (1 - e^(-h/a))

The parameters are the distance (h), range (a), and sill (σ²). This model is characterized by a semivariogram approaching the sill asymptotically. This indicates that the data values always influence each other regardless of the distance apart; however, values separated by distances beyond the range have much less influence on each other than those values separated by distances less than the range.
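The three models can be sketched as follows. The parameter names, and treating γ(0) as exactly zero in the spherical model, are assumptions of this sketch:

```cpp
#include <cmath>
#include <cassert>

// Linear model: gamma grows in proportion to h; Davis notes this is a good
// approximation only for distances much less than the range a.
double linearModel(double h, double sill, double a) {
    return sill * (h / a);
}

// Spherical model: rises as 3h/2a - h^3/2a^3 out to the range a, then holds
// at the sill. c0 is the nugget effect, a discontinuity at the origin when
// nonzero.
double sphericalModel(double h, double sill, double a, double c0) {
    if (h <= 0.0) return 0.0;          // gamma(0) = 0 (sketch assumption)
    if (h >= a) return sill + c0;      // beyond the range: sill plus nugget
    double r = h / a;
    return sill * (1.5 * r - 0.5 * r * r * r) + c0;
}

// Exponential model: approaches the sill asymptotically, so distant samples
// keep a small but nonzero influence.
double exponentialModel(double h, double sill, double a) {
    return sill * (1.0 - std::exp(-h / a));
}
```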
3.6 Isotropy/Anisotropy

3.7 Chapter Summary
This chapter has described some of the most basic forms of kriging. Structural analysis and kriging are complex, robust processes. Kriging is optimal because the estimation variance is minimized, the estimation is unbiased, and the covariance approximation, the semivariogram, analyzes sample points based on their interdependence. There are many 3D medical imaging applications in use today that can use kriging to obtain more accurate estimates. In the next chapter, I will demonstrate how kriging can be used to estimate intracell scalar values in cell interpolation surface extraction and in the volume preprocessing operation of slice interpolation.
IV. Cell Subdivision and Slice Interpolation Implementation
4.1 Introduction
The primary steps in the cell subdivision and surface formation algorithm are

1. Read in four data slices.
2. March major cells between slices.
3. Subdivide major cells into minor cells.
4. Estimate minor-voxel values and normals.
5. Apply marching cubes surface extraction within major cells to form the surface.
6. Render the triangular mesh with a polygonal-based renderer.
Steps 1 and 2

These two steps are part of the marching cubes algorithm developed by Lorensen and Cline (34)¹. Four slices of data are processed at a time. The marching cubes algorithm creates computational cells (cubes) between the two inner slices (see figure 4.1) and approximates the surface within each cell. My cell subdivision algorithm works similarly, except that surface formation is approximated in minor cells, not major cells².
Steps 3 and 4

When a major cell is processed, it is subdivided based on subdivision factors in each of the three component directions. For simplicity, assume the component subdivision factors are equal. A subdivision factor of two will divide a major cell into eight minor cells (see figure 4.2). The 3D minor-voxel points are obtained by calculating division points along lines between major cell vertices based on the subdivision factors. A subdivision factor of two creates one division point along each line, a subdivision factor of three creates two division points along each line, etc., again assuming the same factor in all three directions.
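The division-point computation along one edge can be sketched as follows (the names are illustrative, not the thesis code):

```cpp
#include <vector>
#include <cmath>
#include <cassert>

// Interior division points along one edge from v0 to v1 for a given
// subdivision factor: a factor of two yields one point, three yields two, etc.
std::vector<double> divisionPoints(double v0, double v1, int factor) {
    std::vector<double> pts;
    for (int i = 1; i < factor; ++i)
        pts.push_back(v0 + (v1 - v0) * double(i) / double(factor));
    return pts;
}

// With the same factor in all three component directions, a major cell is
// divided into factor^3 minor cells (a factor of two gives eight).
int minorCellCount(int factor) { return factor * factor * factor; }
```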
The scalar values at these division points are estimated by one of three estimation techniques: trilinear interpolation, tricubic interpolation, or kriging. Trilinear interpolation assumes the voxel values vary linearly within a cell in all three directions. This method assumes only the eight major cell vertices contribute to the estimation of intracell scalar values (minor-voxel values). Since trilinear interpolation is a standard method, details are not given here, but are in appendix H. Tricubic interpolation is a parametric cubic polynomial interpolation method.
¹During this effort, I implemented the marching cubes algorithm described by Lorensen and Cline (34). Details of this implementation are in appendices A and B. While developing the cell subdivision code I reused as much of the marching cubes code as possible.

²My implementation can also perform the marching cubes algorithm on major cells.
Figure 4.1. Four data slices (slice 0 through slice 3); computational cells are formed between the two inner slices, with vertices at voxels i,j,k through i+1,j+1,k+1.

Figure 4.2. A major cell subdivided into minor cells by a subdivision factor of two.
Tricubic interpolation assumes values vary cubically within a major cell, i.e., they fit a 3D curved surface within the major cell. Trilinear interpolation and tricubic interpolation are termed deterministic because the procedures do not account for error. Kriging, on the other hand, estimates values based on the statistics of the data, and not only accounts for error, but minimizes it. It estimates values using a weighted linear combination of nearby known sample values (voxel values). The
weights are determined by conditions that insure unbiased sampling and minimum
estimation error variance. The latter condition requires that covariances between
sample values be computed. These covariances are approximated by a technique
that computes the average difference squared (in distance) between data samples.
This causes sample values closer to the value being estimated to have more influence
in the estimation than sample values farther away. Therefore, kriging is really a
distance-weighted estimation function. It does not assume linear, quadratic, or any
form of variation, although it can be tailored to do so. In fact, I tailor kriging to
behave like both a tricubic and a trilinear interpolator. Details of this tailoring and
tricubic interpolation are discussed in the next two major sections of this chapter.
The purposes of cell subdivision are to 1) resolve ambiguity in cells and 2) provide a better surface approximation. Recall that a cell is ambiguous if more than one topology can be chosen to represent the surface within the cell. Cell subdivision guarantees that major cells will be disambiguated only because they are being subdivided. That is, once subdivided, major cells are no longer treated computationally, hence they are disambiguated. The minor cells are now the computational units. The problem with cell subdivision as a disambiguation method is that minor cells may still be ambiguous. If minor cells are ambiguous, Wilhelms and Gelder (50) apply the facial averaging technique discussed in chapter two to disambiguate these. I do not do this here, because my goal is to investigate the use of kriging to estimate intracell values. Besides disambiguating ambiguous major cells, cell subdivision also provides a better surface approximation within each major cell. This is because the surface is now being detected at a finer sampling, although many of the values are not original samples, but rather estimated values. How well the extracted surface corresponds to the actual surface depends mainly on the accuracy of the estimation function employed. To better understand how subdivision can not only disambiguate most minor cells, but also form a smoother surface, see appendix D.
Step 5

There are two marching cubes implementations used in the cell subdivision process. The first is the outermost loop. In this loop, data slices are read into memory and major cells are formed. At this point, a vanilla marching cubes implementation can be selected³. If the vanilla marching cubes implementation is not selected, cell

³Vanilla indicates no cell subdivision, disambiguation, or enhancements to the original algorithm.
subdivision is performed. The second marching cubes implementation occurs within each nonempty major cell. In this case, mini-slices formed by the subdivision are treated as if they are actual data slices. This minor marching cubes implementation treats minor cells as "cubes." Within each minor cell, triangle vertices and normals are computed to represent the portion of the surface passing through each cell. Details of triangle formation and normal computation are in appendices A, B, and C.
Step 6
The triangular mesh is rendered using Phong reflectance and Phong shading.
Details of the renderer used are in Appendix F.
A more detailed discussion of cell subdivision is in appendix C.
Figure: Estimated values between known points P1 and P2.
Here we are estimating values that lie on or near a surface in 3D, not the points that generate the surface. Also, the tangent vector constraints are determined differently, based on the volume data. The following equation represents the formulation of the values in one dimension:

F(u) = au³ + bu² + cu + d = U·C = U·M_H·G_H = [u³ u² u 1]·M_H·G_H

where 0 ≤ u ≤ 1, and M_H and G_H are the Hermite basis matrix and geometry vector.
Normally, the tangent vector constraints are determined by differentiating the U vector and solving at u = 0 and u = 1, the endpoints of the curve segment. I, as Wilhelms and Gelder do, modify these constraints by assuming the derivative of a value f_i at a point i in one dimension is approximately the central difference f'_i = ½(f_{i+1} - f_{i-1}). The blending functions for a univariate curve F(u) are determined by solving a system of equations including the constraints F(0) = f_0, F'(0) = f'_0, F(1) = f_1, and F'(1) = f'_1. The solution of the system results in the following equation for an estimated value in one dimension
F(u) = Σ_{i=-1}^{2} f_i B_i(u)

where

B_{-1}(u) = ½(-u³ + 2u² - u)
B_0(u) = ½(3u³ - 5u² + 2)
B_1(u) = ½(-3u³ + 4u² + u)
B_2(u) = ½(u³ - u²)
The indexing scheme (-1 to 2) is used to correspond to the position of a cell's eight vertices in relation to the surrounding voxels. The point (0,0,0) is the bottom front left vertex of a major cell (see figure 4.3)⁴.
The tricubic function is determined by applying the equation in all three components and is given by

F(u, v, w) = Σ_{i=-1}^{2} Σ_{j=-1}^{2} Σ_{k=-1}^{2} f_{ijk} B_i(u) B_j(v) B_k(w)
⁴Although not shown in the figure, the major cell centered in the larger surrounding cube is subdivided.
F(u, v, w) estimates minor-voxel values at the intracell points derived by cell subdivision. u, v, and w range between 0 and 1. u is equal to the fraction of the distance between the major cell vertices in the x direction. Similarly, v is the fraction of the distance between the major cell vertices in the y direction, and w corresponds in like manner in the z direction.

This function constrains the surface to the computational cell because the two endpoint values in the formulation for each dimension are cell vertex values. The term tri stems from the three parameters. For the bicubic case, Watt (49) describes the surface formulation as a cartesian product of two curves. In the tricubic case, the surface formulation is the cartesian product of three curves.
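The blending functions and the tricubic tensor product over a 4x4x4 voxel neighborhood can be sketched as follows; the index -1..2 is shifted to 0..3 for array addressing, and the names are illustrative:

```cpp
#include <cmath>
#include <cassert>

// The four univariate blending functions B_{-1}..B_2 given above,
// with the index shifted to 0..3.
double blend(int i, double u) {
    switch (i) {
        case 0:  return 0.5 * (-u*u*u + 2*u*u - u);   // B_{-1}
        case 1:  return 0.5 * (3*u*u*u - 5*u*u + 2);  // B_0
        case 2:  return 0.5 * (-3*u*u*u + 4*u*u + u); // B_1
        default: return 0.5 * (u*u*u - u*u);          // B_2
    }
}

// Tricubic estimate: the Cartesian product of three curves over a 4x4x4
// voxel neighborhood f, with (u, v, w) in [0,1] inside the central cell.
double tricubic(const double f[4][4][4], double u, double v, double w) {
    double sum = 0.0;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                sum += f[i][j][k] * blend(i, u) * blend(j, v) * blend(k, w);
    return sum;
}
```

The four blending functions sum to one for any u, so a constant field is reproduced exactly, and at (u, v, w) = (0, 0, 0) the estimate reduces to the cell vertex value, as the endpoint constraints require.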
This section discusses the kriging estimation technique used to estimate minor-voxel values. First, an overview of the technique is presented. Following that, specific implementation conditions are listed and discussed. Lastly, the kriging estimation procedure implemented in this research is presented.

The kriging code developed during this effort was written by Capt Chris Brodkin (3) and modified for use in intracell scalar value estimation. It is object-oriented code written in C++.
4.4.2 Global and Local Drift. Drift is the phenomenon that occurs when sample means vary from point to point. A sample mean is derived by choosing some neighborhood of values surrounding a known sample point and calculating a weighted average. If these weighted averages (sample means) differ from point to point, then drift exists. If they are the same, then the sample means are constant.
Drift can occur in two ways. It can occur within local regions (local drift) and/or throughout the entire data set (global drift). If global drift exists, it is removed by calculating residuals from the sample means. A residual is calculated by subtracting the estimated drift from the sample value at a known point. Residual data is considered to be stationary, i.e., there is no drift. Since kriging only works with stationary data, the residuals are then kriged to estimate values. These values are then added to the sample means to get the final estimates.
Drift can also occur just within the neighborhood of sample values being used
to determine the linear sum. This local drift can be accounted for in the universal
kriging system of equations.
Global drift is assumed not to exist in this implementation to simplify the process. Local drift is incorporated into the universal system by the polynomial form of the drift expression. Even if global drift exists, it should not have a large effect if local drift is accounted for.

The three-dimensional local linear drift expression implemented for this effort, in terms of the geometric coordinates x_i, y_i, z_i, is

m(p_i) = m(x_i, y_i, z_i) = d_0 + d_1 x_i + d_2 y_i + d_3 z_i
4.4.3 The Assumed Model Semivariogram. The model semivariogram is a continuous function that takes the distance between two sample points as a parameter.
It is used in the kriging system of equations as an approximation to the covariance
between sample values to give geometric meaning to the values instead of probabilis
tic meaning as in classical statistics. The covariance measures the interdependence
or correlation of sample values, whereas the semivariogram measures the spatial de
pendence of sample values based on distance. Two types of semivariograms exist.
These are the experimental and the model. The experimental semivariogram is a discrete function derived from the data set prior to kriging. It is computed as an
average difference squared between data points. The experimental semivariogram is
then compared to continuous, known model semivariograms to find the best match.
Model semivariograms are actually used in the kriging equations because distances
other than those found in the data set might be used to estimate new points.
This model indicates a polynomial drift exists in the data, which is certainly the assumption made in the tricubic interpolation method discussed above. The function used is simply

γ(h) = |h³|

This model semivariogram, coupled with the drift expressions above, parallels the tricubic method. The tricubic method is derived from parametric cubic functions.
4.4.4 Neighborhood Size. Recall from chapter three that neighborhood size is the number of sample values in the kriging estimation equation. Neighborhood size is usually determined during a structural analysis of the data (structural analysis was discussed in chapter three). Instead of determining the neighborhood by a structural analysis of the data, I assumed different sizes. First, I assumed the same neighborhood size as that used in tricubic interpolation, 64. This neighborhood size is the 56 surrounding points, including the cell's eight points, for a total of 64 sample values in the kriging system (see figure 4.3). The neighborhood sizes 8, 16, and 32 were also investigated for both cell subdivision intracell scalar value estimation and slice interpolation. These sizes were chosen because they were easily implemented
from the cell geometry. They were used to demonstrate the effect of kriging with
different neighborhood sizes.
4.4.5 Estimation Procedure. The goal of kriging, restated in terms of this particular problem, is to estimate

Ẑ(p) = Σ_{i=1}^{n} w_i Z(p_i)

where the p_i's and Z(p_i)'s are the surrounding voxel points and values and p is the 3D point where the value Ẑ is being estimated. n = 8, 16, 32, or 64 in this equation and all the following ones for my implementation.
Both the ordinary and universal forms of kriging were implemented. Recall
the ordinary system is
Σ_{j=1}^{n} w_j γ_ij + μ = γ_ip   (i = 1, ..., n)

Σ_{j=1}^{n} w_j = 1

or, in matrix form,

| γ_11 γ_12 ... γ_1n 1 |   | w_1 |   | γ_1p |
| γ_21 γ_22 ... γ_2n 1 |   | w_2 |   | γ_2p |
|  .    .         .  . | x |  .  | = |  .   |
| γ_n1 γ_n2 ... γ_nn 1 |   | w_n |   | γ_np |
|  1    1   ...   1  0 |   |  μ  |   |  1   |
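The ordinary system can be assembled and solved directly. The sketch below uses Gauss-Jordan elimination and the |h³| model semivariogram assumed in section 4.4.3; the function names are illustrative, and this is not the thesis's C++ code:

```cpp
#include <vector>
#include <cstddef>
#include <cmath>
#include <utility>
#include <cassert>

struct Pt { double x, y, z; };

double dist(const Pt& a, const Pt& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Model semivariogram; |h^3| as assumed in section 4.4.3.
double gammaModel(double h) { return std::fabs(h * h * h); }

// Solve A*x = b by Gauss-Jordan elimination with partial pivoting.
std::vector<double> solveSystem(std::vector<std::vector<double>> A,
                                std::vector<double> b) {
    int n = (int)b.size();
    for (int c = 0; c < n; ++c) {
        int piv = c;
        for (int r = c + 1; r < n; ++r)
            if (std::fabs(A[r][c]) > std::fabs(A[piv][c])) piv = r;
        std::swap(A[c], A[piv]);
        std::swap(b[c], b[piv]);
        for (int r = 0; r < n; ++r) {
            if (r == c) continue;
            double m = A[r][c] / A[c][c];
            for (int k = c; k < n; ++k) A[r][k] -= m * A[c][k];
            b[r] -= m * b[c];
        }
    }
    for (int i = 0; i < n; ++i) b[i] /= A[i][i];
    return b;
}

// Ordinary point kriging: build the (n+1)x(n+1) system shown above, solve
// for the weights and the Lagrange multiplier, then form the weighted sum.
double ordinaryKrige(const std::vector<Pt>& p, const std::vector<double>& z,
                     const Pt& q) {
    int n = (int)p.size();
    std::vector<std::vector<double>> A(n + 1, std::vector<double>(n + 1, 0.0));
    std::vector<double> b(n + 1, 0.0);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) A[i][j] = gammaModel(dist(p[i], p[j]));
        A[i][n] = 1.0;               // column for the Lagrange multiplier
        A[n][i] = 1.0;               // unbias row: weights sum to 1
        b[i] = gammaModel(dist(p[i], q));
    }
    b[n] = 1.0;                      // A[n][n] stays 0
    std::vector<double> w = solveSystem(A, b);
    double est = 0.0;
    for (int i = 0; i < n; ++i) est += w[i] * z[i];
    return est;
}
```

Because the weights sum to one, a constant field is reproduced exactly, and at a sample point the estimate returns that sample's value (kriging is an exact interpolator).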
For the universal system, the equations change depending on the assumed local drift f_l(p_i). The system is

Σ_{j=1}^{n} w_j γ_ij + Σ_{l=0}^{k} μ_l f_l(p_i) = γ_ip   (i = 1, ..., n)

Σ_{j=1}^{n} w_j f_l(p_j) = f_l(p)   (l = 0, 1, ..., k)

For three-dimensional quadratic drift, the k + 1 = 10 constraint equations are

Σ_i w_i = 1                  (l = 0)
Σ_i w_i x_i = x_p            (l = 1)
Σ_i w_i y_i = y_p            (l = 2)
Σ_i w_i z_i = z_p            (l = 3)
Σ_i w_i x_i² = x_p²          (l = 4)
Σ_i w_i y_i² = y_p²          (l = 5)
Σ_i w_i z_i² = z_p²          (l = 6)
Σ_i w_i x_i y_i = x_p y_p    (l = 7)
Σ_i w_i x_i z_i = x_p z_p    (l = 8)
Σ_i w_i y_i z_i = y_p z_p    (l = 9)

where x_i, y_i, and z_i are the components of the i'th 3D sample point.
In expanded matrix form the universal system is

| γ_11 ... γ_1n   1  x_1 y_1 z_1 x_1² y_1² z_1² x_1y_1 x_1z_1 y_1z_1 |   | w_1 |   | γ_1p   |
|  .         .    .                    .                             |   |  .  |   |  .     |
| γ_n1 ... γ_nn   1  x_n y_n z_n x_n² y_n² z_n² x_ny_n x_nz_n y_nz_n |   | w_n |   | γ_np   |
|  1   ...  1     0   0   0   0   0    0    0     0      0      0    | x | μ_0 | = |  1     |
| x_1  ... x_n    0   0   0   0   0    0    0     0      0      0    |   | μ_1 |   | x_p    |
| y_1  ... y_n    0   0   0   0   0    0    0     0      0      0    |   | μ_2 |   | y_p    |
| z_1  ... z_n    0   0   0   0   0    0    0     0      0      0    |   | μ_3 |   | z_p    |
|  .         .    .                    .                             |   |  .  |   |  .     |
| y_1z_1 . y_nz_n 0   0   0   0   0    0    0     0      0      0    |   | μ_9 |   | y_pz_p |
The A matrix is inverted to solve for the column vector of unknown weights and Lagrange multipliers. Recall the semivariogram γ_mi is the same as γ(h_mi), where h_mi is the distance from point m to point i. The μ_l's are the k + 1 Lagrange multipliers, where k = 9, and (x_p, y_p, z_p) is the point where the value is being estimated.
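The drift bookkeeping for this universal system can be sketched as follows. Only the drift rows and columns are filled here; the γ block is built as in ordinary kriging, and the names are illustrative:

```cpp
#include <vector>
#include <array>
#include <cstddef>
#include <cmath>
#include <cassert>

// The k + 1 = 10 drift basis functions f_l for quadratic drift in 3D, in the
// column order of the matrix above: 1, x, y, z, x^2, y^2, z^2, xy, xz, yz.
std::array<double, 10> driftBasis(double x, double y, double z) {
    return {1.0, x, y, z, x*x, y*y, z*z, x*y, x*z, y*z};
}

// Fill the drift rows and columns of the (n+10)x(n+10) matrix A and the
// drift entries of the right-hand side b for estimation point q. The
// upper-left n x n block (the gamma values) is filled elsewhere.
void fillDriftTerms(std::vector<std::vector<double>>& A, std::vector<double>& b,
                    const std::vector<std::array<double, 3>>& pts,
                    const std::array<double, 3>& q) {
    std::size_t n = pts.size();
    for (std::size_t i = 0; i < n; ++i) {
        std::array<double, 10> f = driftBasis(pts[i][0], pts[i][1], pts[i][2]);
        for (std::size_t l = 0; l < 10; ++l) {
            A[i][n + l] = f[l];   // columns multiplying the multipliers mu_l
            A[n + l][i] = f[l];   // rows enforcing sum_i w_i f_l(p_i) = f_l(p)
        }
    }
    std::array<double, 10> fq = driftBasis(q[0], q[1], q[2]);
    for (std::size_t l = 0; l < 10; ++l) b[n + l] = fq[l];
}
```

The lower-right 10x10 block stays zero, mirroring the expanded matrix above.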
This section describes how scalar values are estimated between medical slices for the volume preprocessing operation of slice interpolation. To accomplish this task I reused as much of the cell subdivision code as possible. Computational cells are used in the estimation process. The estimation is done only along one cell edge because the purpose here is to interpolate only in the z direction. Recall the z direction is the direction that the data slices are stacked. Four different neighborhood sizes were used: 8, 16, 32, and 64. The slice interpolation algorithm is
1. Read in four medical data slices.
2. March cells (cubes) between the two inner data slices.
3. Estimate scalar values for new slice(s) along cell edges.
4. Create a gray scale image from the estimates.
Four slices are needed at a time because the tricubic interpolation requires a neighborhood size of 64. Computational cells on the boundary of the data are treated specially. Since they do not have access to the larger neighborhood of sample values, I compute a linear interpolation along boundary cell edges. This should have no effect on the final image because the boundaries of images typically do not contain any significant data.
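The boundary-cell fallback amounts to a per-voxel linear interpolation between the two bounding slices along each z-edge; a minimal sketch (names illustrative):

```cpp
#include <vector>
#include <cstddef>
#include <cmath>
#include <cassert>

// Build one interpolated slice at fractional position t (0 < t < 1) between
// two adjacent data slices by interpolating linearly along each z-edge.
// A slice is stored as a flat, row-major array of voxel values.
std::vector<double> interpolateSlice(const std::vector<double>& lower,
                                     const std::vector<double>& upper,
                                     double t) {
    std::vector<double> out(lower.size());
    for (std::size_t i = 0; i < lower.size(); ++i)
        out[i] = lower[i] + (upper[i] - lower[i]) * t;
    return out;
}
```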
My implementation of linear interpolation is not presented because it is a standard method. The important point about linear interpolation is that it assumes only a linear variation. The tricubic interpolation and kriging estimation methods used here are the same ones described in the previous sections.
The next chapter presents the results of implementing the methods discussed in this chapter and in the appendices. The appendices contain descriptions of the methods used to accomplish the other tasks outlined in chapter one, yet not mentioned in this chapter.
V. Results
This chapter presents the results obtained from implementing the estimation
methods discussed in the previous chapter. The results are divided into three major
areas:
• Artificial Volume: cell interpolation surface extraction and intracell scalar value estimation in an artificial volume.

¹When I discuss image accuracy or quality, this is my opinion about the visual appearance of the images.
The table headings are as follows. In the pictures derived by cell subdivision, the number of triangles and nonempty minor cells are listed. These are denoted by the column headers "# TRI" and "# NEM" respectively. All other columns discussed are applicable to all the tables. The first is "Images". This header indicates one particular image in a picture. The header "values compared with" indicates another image compared with the image listed under the column "Images". The header "largest est value diff" presents the largest estimated value difference between the two compared images. This entry provides knowledge of the extreme deviation of sample values derived by two different estimations. As David states, "the most natural way to compare two values ... is to consider their difference." I actually take the absolute value for this column. David also states that the average difference is an important measurement as well to understand the dissimilarity between values (13). This measurement is presented in the column under the heading "avg diff of values". The average difference of the values is derived by the formula
Σ_{i=1}^{n} abs(image1_value_i - image2_value_i) / n

where n is the number of values estimated, image1_value_i is the i'th value estimated in the generation of the image under the column "Images", and image2_value_i is the i'th value estimated in the generation of the image under the column "values compared with". Finally, the last column in all the tables indicates the average difference as a percentage of the total scalar value range. For example, the scalar value range for the F2 function in an artificial volume is 136.0. Also, the scalar value range for the slice interpolation tests is 256: 0 through 255.
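The two comparison measurements can be computed as follows (names illustrative):

```cpp
#include <vector>
#include <cstddef>
#include <cmath>
#include <cassert>

// Average absolute difference between the values estimated for two images.
double avgDiff(const std::vector<double>& a, const std::vector<double>& b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) sum += std::fabs(a[i] - b[i]);
    return sum / double(a.size());
}

// The same measurement expressed as a percentage of the total scalar range.
double avgDiffPercent(const std::vector<double>& a, const std::vector<double>& b,
                      double range) {
    return 100.0 * avgDiff(a, b) / range;
}
```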
The main purpose of this section is to show that using kriging to estimate intracell scalar values resolves ambiguity in cells and that kriging is very flexible compared to the other two interpolation methods. By flexibility I mean that I can change parameters in the process that alter how kriging estimates. This cannot be done with the other techniques. Both trilinear and tricubic assume the neighborhood and local data variation. The advantage of kriging is that it can be tailored to behave like either of the other two interpolation methods, or it can be tailored to analyze any size neighborhood and assume other local data variations besides linear and cubic. First presented are images comparing kriging to the other two interpolation techniques, fixing kriging to behave like tricubic. Then, both the local drift and
the neighborhood size are changed to show how flexible kriging is and demonstrate
how important these two factors are. Many other factors can be modified to make
kriging more robust and more accurate, but the purpose of this research is to just
demonstrate its usefulness and applicability.
An artificial volume is created by calculating scalar values at voxel points. The artificial volumes represent continuous 3D isosurfaces derived from mathematical functions. The details of this process are in an appendix. The mathematical function used is a hyperboloid with known ambiguous cases. The function, which Wilhelms and Gelder called F2 (50), is
F₂(x, y, z) = 4(y - 1)² + 2(x - z)² - 2(x + z - 3)² + 1
5.1.2 Neighborhood size 64, subdivision factors differ, no local drift assumed. Next, I ran the program with the three different estimation methods at the subdivision factors two, three, four, and five, still maintaining the kriging neighborhood size at 64 and assuming no local drift. The results can be seen in figures 5.1 through 5.7. All the cell subdivision methods (trilinear and tricubic interpolation and kriging) removed the ambiguous cells for all subdivision factors. In other words, after cell subdivision and intracell scalar value estimation, no ambiguous cases were found. The subdivision factor as well as the estimation function significantly alters how the surface is represented. The higher the subdivision factor, the smoother the surface
representation. Also, tricubic interpolat ion and kriging tailored to behave like tricu
53
l1.T Miterpoi)
'Io t u )";: v1. 1ilahtValdE" L1hat iq~tc 'mui ll . ': ~'I ill iotI '
Cp.11 4 alar i dne e'!t in1Z1 loll look~ aliiiust 1(101 iral. IThis i Is !( it a oIlilarisonl of
01 , t I I,! t cdf \,I II, bet wc I I I lie( two) 1tletlleJd'. F, 'r* o\e IlI)p]', at 'Iii tI\ i:sion factor
.val des et lilatc w.1triclibi)Intterpolat ioll adl dffe'r b\~ ;III averageZ of .011:3
ramwi~ of stalm \ alw
u Is .19.0 ito 87.0. .Sov table, 5. fo let dli ced ( oruparison of the
('st imat loi (' li(IIII(i'. T1Hf IlIdjcatc." triang''le md~cNENI inc'ari Nuiiemptv Minor
ccV~
!lc'"111' arc, pnrecItc'( In tisllalee. BY ai iai hii, tabile and exanunl
tig i, illat's :t i" 'Icells that I he krigiuit! a.'silillt icon, mlodel t IcIlim Interipolation
iost'l\. Fo.r .lcIilo factrs 2. ;111( 1. 1,liv Iiiie of tI iniles a., \vell ai the
tiiiiu1bl, of Iloivflpc'u j)!:1\ lllo CvIlk ;it'( exact l 1'.I 1au1 for lilt tt ice and kriging
e."pmt.
111'I F'il. Indicate;, that theW c'ttimacd \al'r at'' \e * Sitmilar'. To
l':
firtIl lc, tilt. I cmtparIel vaiities (''ctitnlated as5 'i)wtil ini Ow tabile. noting the
Iat~c, ciff:(iet'(1 III Vxc
\alii and t'111t.c tffr'i. f.l Oit ('stIlrtiate(I Values.
Table 5.1. Comparison of values estimated by trilinear and tricubic interpolation and kriging

Image          triangles  NEMC   values compared with  largest est diff  avg diff  avg diff % of 136
f2trilinear2   140        70
f2tricubic2    140        70     f2krige2              .1363             .0031     .0023
f2krige2       140        70
f2trilinear3   332        166
f2tricubic3    348        174    f2krige3              .1211             .0069     .0050
f2krige3       348        174
f2trilinear4   560        280
f2tricubic4    576        288    f2krige4              .1331             .0113     .0083
f2krige4       576        288
f2trilinear5   888        444
f2tricubic5    920        460    f2krige5              .1316             .0160     .0118
f2krige5       912        456
Figure 5.2. Cell subdivision, factor 2, with trilinear interpolation estimating minor
voxel values and marching cubes extraction of hyperboloid surface from
mini-slices.
5.1.3 Neighborhood size differs, Subdivision factor = 2, local drift assump-
tion differs. The next set of images in this section demonstrates the flexibility of
kriging (figures 5.9 - 5.13). In these sets of images, the subdivision factor is two,
chosen arbitrarily. I altered both the neighborhood size and the local drift. In all
the images except those with a neighborhood of 32, the images with no drift are
significantly different than those with local linear drift. It appears that with smaller
neighborhoods, assuming local drift prevents the inaccuracies seen by the holes.
Figure 5.3. Cell subdivision, factor 2, with tricubic interpolation estimating minor
voxel values and marching cubes extraction of hyperboloid surface from
mini-slices.
values in the 2 slices above and below the computational cell. A neighborhood of
16 in the 'X' direction, denoted nh16x in the figures, consists of the eight compu-
tational cell vertices and four on either side of the cell in the X direction. 16 'Y'
and 16 'Z' neighborhoods are derived in a similar fashion. These particular neigh-
borhoods were chosen because of their direct correspondence to the computational
cell and the indexing scheme already used for tricubic interpolation and kriging in a
64 neighborhood.
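The neighborhood definitions above can be sketched as sets of voxel offsets relative to the cell's lower corner; the exact indexing in the thesis code may differ, so the layout below is an assumed illustration.

```python
# Voxel offsets (relative to the cell's lower corner) for the kriging
# neighborhoods described above. The layout is an assumed illustration.

# The 8 computational cell vertices.
cell = [(i, j, k) for i in (0, 1) for j in (0, 1) for k in (0, 1)]

# nh64: the full 4x4x4 block of voxels surrounding the cell.
nh64 = [(i, j, k) for i in range(-1, 3)
        for j in range(-1, 3) for k in range(-1, 3)]

# nh16x: the 8 cell vertices plus four voxels on either side of the cell
# in the X direction (i = -1 and i = 2).
nh16x = cell + [(i, j, k) for i in (-1, 2) for j in (0, 1) for k in (0, 1)]

print(len(cell), len(nh64), len(nh16x))  # 8 64 16
```

The nh16y and nh16z neighborhoods follow the same pattern with the extra samples taken along the 'Y' and 'Z' axes instead.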
Images depicting differences were created by the Utah RLE library tool rlecomp
with the diff operator. This tool performs the logical set difference operation between
pixel values in the two images.
Table 5.2 depicts differences in estimated values for the last sets of images.
Notice that kriging with a neighborhood of 32 sample values and no local drift
matches tricubic interpolation almost as closely as kriging with a neighborhood of
64 sample values does. I compared local linear drift kriging images to trilinear and
linear because local linear drift kriging produced better images. As soon as the
kriging neighborhood size reduces to 16, except in the case of nh16y, the kriging
estimates match trilinear interpolation more closely than tricubic.
Figure 5.4. Cell subdivision, factor 2, with kriging estimating minor voxel values
and marching cubes extraction of hyperboloid surface from mini-slices.
Kriging uses a neighborhood of 64 sample values and assumes no local
drift.
Table 5.2. Comparison of values estimated by different kriging forms for cell in-
terpolation surface extraction of a hyperboloid surface in an artificial
volume

Image                values compared with  largest est diff  avg diff  avg diff % of 136
f2krigenh32nodrift   f2krigenh32linear     .0310             .0004     .0003
f2krigenh32nodrift   f2tricubic2           .5066             .0098     .0072
f2krigenh16xnodrift  f2krigenh16xlinear    .1922             .0023     .0017
f2krigenh16xlinear   f2tricubic2           31.7500           1.2602    .9266
f2krigenh16xlinear   f2trilinear2          .0646             .0006     .0004
f2krigenh16ynodrift  f2krigenh16ylinear    .1617             .0022     .0016
f2krigenh16ylinear   f2tricubic2           4.0790            .0210     .0151
f2krigenh16ylinear   f2trilinear2          2.0,,,0           .0418     .0307
f2krigenh16znodrift  f2krigenh16zlinear    .1922             .0023     .0017
f2krigenh16zlinear   f2tricubic2           31.7500           1.2602    .9266
f2krigenh16zlinear   f2trilinear2          .0646             .0006     .0004
f2krigenh8nodrift    f2krigenh8linear      .9313             .0138     .0100
f2krigenh8linear     f2tricubic2           31.7500           1.2602    .9266
f2krigenh8linear     f2trilinear2          .0646             .0006     .0004
Figure 5.5. Subdivision factor 3. Upper left, vanilla marching cubes. Upper right,
trilinear interpolation. Lower left, tricubic interpolation. Lower right,
kriging with 64 neighborhood, no drift.
Recall that I was attempting to tailor kriging to model tricubic interpolation.
However, as the table and images indicate, without changing any parameters except
neighborhood size, kriging also behaves like trilinear interpolation. It appears that
kriging used with this particular data is more influenced by neighborhood size and
local drift assumptions than by the semivariogram model. To test this further, I
experimented with other semivariogram models and obtained average differences
Figure 5.6. Subdivision factor 4. Upper left, vanilla marching cubes. Upper right,
trilinear interpolation. Lower left, tricubic interpolation. Lower right,
kriging with 64 neighborhood, no drift.
of estimated values less than .00051. The data was compared to data estimated by
the original model, holding all other assumptions the same.
5.2 Medical image slice interpolation. Slice interpolation is a volume preprocessing
operation for estimating scalar values in the 'Z' dimension. In this section, I demonstrate
the use of linear interpolation, tricubic interpolation, and kriging as the interpolation
methods used to estimate data values. Trilinear and linear are equivalent in this case
because interpolation occurs in one dimension, 'Z', along a cell edge.
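Because only the 'Z' direction is interpolated, estimating a new slice with linear interpolation reduces to an independent 1D computation per pixel between the two bounding slices. A sketch (illustrative only, not the thesis code):

```python
# Linear slice interpolation: each pixel of the new slice is interpolated
# independently between the two existing slices. t is the fractional
# position of the new slice (t = 0.5 for a slice midway between them).

def interpolate_slice(slice_a, slice_b, t=0.5):
    return [[(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(slice_a, slice_b)]

a = [[10.0, 20.0], [30.0, 40.0]]
b = [[20.0, 40.0], [10.0, 0.0]]
print(interpolate_slice(a, b))  # [[15.0, 30.0], [20.0, 20.0]]
```

Tricubic interpolation and kriging replace the two-sample average at each pixel with an estimate drawn from a larger neighborhood of slices.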
5.2.1 Dog heart, CT. The first study consists of a set of CT slices
of a dog's heart, pictured in figure 5.14. The goal is to create a new slice between
slices 11 and 12. The top four images in the picture are the original data slices.
The remaining images are attempts to estimate a logical data slice between the two
Figure: diagram of the kriging neighborhoods - the computational cell being
estimated, nh32, nh16x (16 sample values in the X direction), and nh16y (16
sample values in the Y direction) - with the sample values contributing to the
estimation marked.
Figure 5.12. Kriging estimation, subdivision factor 2, neighborhood 16z. Upper
left, no drift. Upper right, local linear drift. Lower left, image differ-
ence of upper right from upper left. Lower right, image difference of
upper right image from f2trilinear2.
5.2.2 Baby head, MRI. The second 2D medical image study (see figure 5.15)
consists of 166 X 166 MRI slices of a three month old baby's head. Interslice thickness
is 4mm. The goal is to estimate a new slice between slices 31 and 32. The original
slices again are in the top row and the second row shows slices derived by linear
and tricubic interpolation. All others are different forms of kriging.
Although all the estimated images (except krigenh8nodrift) look almost iden-
tical, an examination of the estimated values reveals that estimations in this study
are similar to those in the dog heart study (see table 5.4).
The final series of images consists of a 3D surface extracted from 60 human
baby head MRI data slices. The 2D slice dimensions are 128 pixels X 128 pixels
and the isovalue is 43. The surface methods I use are the vanilla marching cubes
(figure 5.16) and cell subdivision. In the cell subdivision technique, I use a subdi-
vision factor of 2 in the 'Z' dimension and 1 in both the 'X' and 'Y'. This has the
Table 5.3. Comparison of dog heart CT estimated values.

Image              values compared with  largest est diff  avg diff  avg diff % of 256
krigenh64nodrift   krigenh64linear       .6229             .0196     .0077
krigenh64linear    tricubic              4.1242            .1365     .0533
krigenh64linear    linear                18.0780           .6781     .2649
krigenh32nodrift   krigenh32linear       .9815             .0416     .0163
krigenh32linear    tricubic              12.1747           .5519     .2156
krigenh32linear    linear                3.2616            .1295     .0506
krigenh16xnodrift  krigenh16xlinear      30.0089           1.3080    .5109
krigenh16xlinear   tricubic              12.4780           .5407     .2112
krigenh16xlinear   linear                2.2749            .0785     .0307
krigenh16ynodrift  krigenh16ylinear      70.8097           .9918     .3870
krigenh16ylinear   tricubic              12.7687           .5527     .2159
krigenh16ylinear   linear                3.0524            .1129     .0441
krigenh16znodrift  krigenh16zlinear      24.5295           .4502     .1759
krigenh16zlinear   tricubic              2.7525            .1125     .0439
krigenh16zlinear   linear                17.1736           .6742     .2634
krigenh8nodrift    krigenh8linear        203.1685          10.8014   4.2190
krigenh8linear     tricubic              11.4007           .5639     .2203
krigenh8linear     linear                .2923             .0091     .0036
Table 5.4. Comparison of baby head MRI estimated values.

Image              values compared with  largest est diff  avg diff  avg diff % of 256
krigenh64nodrift   krigenh64linear       1.8551            .0571     .0223
krigenh64linear    tricubic              6.7329            .3772     .1473
krigenh64linear    linear                23.0072           1.4960    .5844
krigenh32nodrift   krigenh32linear       3.4845            .0932     .0364
krigenh32linear    tricubic              17.1510           1.0481    .4094
krigenh32linear    linear                5.1335            .2807     .1097
krigenh16xnodrift  krigenh16xlinear      96.5233           3.4776    1.3580
krigenh16xlinear   tricubic              17.3192           1.0851    .4239
krigenh16xlinear   linear                4.2738            .2252     .0880
krigenh16ynodrift  krigenh16ylinear      70.1153           2.5202    .9845
krigenh16ylinear   tricubic              17.4163           1.0853    .4239
krigenh16ylinear   linear                5.2594            .2249     .0878
krigenh16znodrift  krigenh16zlinear      21.8722           1.0342    .4040
krigenh16zlinear   tricubic              4.1201            .2519     .0984
krigenh16zlinear   linear                21.4664           1.4106    .5510
krigenh8nodrift    krigenh8linear        227.1227          16.3677   6.3940
krigenh8linear     tricubic              17.9036           1.1631    .4543
krigenh8linear     linear                .7859             .0282     .0110
Figure 5.14. Dog heart CT slices 11 and 12, with the five different estimated slices.
effect of performing the volume preprocessing operation called slice interpolation. The
various modes of estimation can all be seen in the pictures. Figures 5.17, 5.18, and
5.19 indicate that the form of interpolation used in the volume preprocessing
operation significantly affects the final reconstructed surface. There are significant
differences between the methods; for example, figure 5.20 shows the pixel differences
between the surface images generated by trilinear interpolation, tricubic interpolation,
and kriging.
X by 4 by 4 Z volume. The ambiguous cell cases in the volume were removed in
all cases by cell subdivision, regardless of the estimation technique employed or the
subdivision factor. These images demonstrated that kriging is very flexible. That
is, modifying the neighborhood size and assumed local drift significantly alters the
way kriging estimates values. Larger neighborhood sizes of 64 and 32 cause kriging
to behave like tricubic interpolation, regardless of the assumed local drift. However,
at neighborhood sizes of 16 'X', 16 'Z', and 8, kriging with local linear drift behaves
almost exactly as trilinear interpolation. The use of universal kriging was essential
at these lower neighborhood sizes for correcting inaccuracies produced from using
ordinary kriging. The best appearing images in the artificial volume occurred with
tricubic interpolation and kriging tailored to behave like tricubic interpolation.
Different results were obtained by estimating 2D medical data slices between
two existing ones. At neighborhoods of 16 'X', 16 'Y', and 8, kriging estimated more
like linear interpolation than tricubic interpolation. Again, the use of universal krig-
ing was critical for correcting gross inaccuracies produced by using ordinary kriging.
The most significant result in these studies is that tricubic, and kriging tailored to
behave like tricubic, estimate values poorly. Little can be done to tricubic inter-
polation to fix these inaccurate estimations. It inherently assumes a cubic variation
and uses 64 sample values for an estimate. However, kriging can be modified to
overcome these inaccuracies. I did this by reducing the neighborhood size. The best
image in the study was obtained with a neighborhood size of 8 and local linear drift.
This image may not represent the best estimated values, however. The best estimate
can only be derived by doing a structural analysis of the data to determine if the
data is isotropic, to find a possibly better model semivariogram, and to determine
the optimal number of sample values. Furthermore, the optimal neighborhood size
may not be 8, 16, 32, or 64.
Finally, I presented a study of images derived from 60 MRI slices reconstructed
into a 3D surface representation. This study demonstrated that tricubic interpolation
inaccurately estimates intracell scalar values for the purpose of cell subdivision
surface extraction - at least for the data set I analyzed. Both trilinear and kriging,
modified to behave more like trilinear than tricubic interpolation, produced much
better appearing images.
Showing that kriging can estimate like standard deterministic methods is im-
portant. It shows that just guessing the kriging parameters produces results as good
as those already used in practice. This means that by using kriging, the estimation is
no worse than the standard deterministic methods. However, kriging theory states
that kriging will produce the best estimates if properly applied. Properly applied
means performing a structural analysis of the data to determine a model semivariogram
that models the spatial correlation of sample values.
VI. Recommendations and Conclusion
This chapter first discusses recommended research using the 3D imaging tech
niques implemented. Following that, future kriging research applied to 3D imaging
is suggested. Finally, a brief conclusion to the thesis is presented.
6.1
processing time. Also, utilizing a hardware renderer should decrease the processing
time even more.
The kriging estimation methods implemented in this work can be modified and
enhanced to investigate further uses in 3D imaging. The results depicted and de
scribed in this thesis indicate scalar value estimation needs to be investigated further.
Tricubic interpolation does not estimate values accurately in the medical data sets I
analyzed. This study demonstrated that kriging is very flexible and can be modified
to behave like different estimation methods, including both tricubic interpolation and
trilinear interpolation. A research effort should be conducted to determine how to
make kriging find the best estimation as the theory indicates it should. This requires
a structural analysis of the data to determine the characteristics of the regionalized
variables - support, continuity, and anisotropy. Continuity of sample values exists
in certain portions of the human body such as organs and bone. What needs to be
determined is how to find these zones of influence. Finding the zone of influence
determines the kriging neighborhood and the type of model semivariogram to use.
In addition to the model semivariogram, I also made assumptions about the drift,
the neighborhood, and isotropy. In some cases, such as a neighborhood of 64, the
data might be anisotropic. If the assumption of isotropy or the model semivariogram was
wrong, fixing them according to a structural analysis should make kriging produce
"optimal" estimates.
Before structural analysis can be done, however, certain tools have to be built
or modified from existing ones. Several structural analysis tools have been built at
AFIT for use in 2D data sets. These would have to be modified for 3D use. The
tools include one that calculates the experimental variogram. It currently estimates
the semivariogram in only two directions (to check for anisotropy). This would have
to be modified to estimate semivariograms in other directions for 3D. A procedure
exists that determines parameters for a semivariogram model. There are only a few
models available in the tool, so model implementation is another area of research, as
well as modifying the existing ones for use in volume data sets.
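The directional experimental variogram calculation discussed above generalizes directly to 3D: computing the experimental semivariogram along each axis and comparing the results is the basic anisotropy check. A sketch (illustrative only; axis-aligned lags, regular grid assumed):

```python
# Directional experimental semivariograms for a 3D volume vol[k][j][i]:
# gamma_hat(h) = (1 / (2 * N(h))) * sum over pairs ((z(x + h) - z(x)) ** 2).
# Comparing gamma_hat along different axes is the anisotropy check the
# text describes. Only axis-aligned lags on a regular grid are sketched.

def semivariogram_3d(vol, lag, axis):
    dk, dj, di = [(lag, 0, 0), (0, lag, 0), (0, 0, lag)][axis]
    nk, nj, ni = len(vol), len(vol[0]), len(vol[0][0])
    diffs = [(vol[k + dk][j + dj][i + di] - vol[k][j][i]) ** 2
             for k in range(nk - dk)
             for j in range(nj - dj)
             for i in range(ni - di)]
    return sum(diffs) / (2.0 * len(diffs))

# A volume varying only along the 'i' axis: anisotropy shows up as
# different semivariogram values per direction.
vol = [[[float(i) for i in range(4)] for _ in range(4)] for _ in range(4)]
print(semivariogram_3d(vol, 1, 2), semivariogram_3d(vol, 1, 0))  # 0.5 0.0
```

A model semivariogram (linear, spherical, Gaussian, and so on) would then be fitted to these experimental values per direction as part of the structural analysis.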
6.3 Conclusion
This research investigated different methods for estimating scalar values within
computational cells and in the volume preprocessing operation of slice interpolation.
These methods include the deterministic linear, trilinear, and tricubic interpolations
and the geostatistical estimation technique, kriging. Isosurfaces were generated
by marching cubes and another cell interpolation method called cell subdivision.
The estimation techniques were used to estimate intracell scalar values in the cell
subdivision method. They were also used to estimate logical data slices between
existing ones for the volume preprocessing operation of slice interpolation. This
research introduced kriging as an estimation technique for use in 3D imaging.
I demonstrated that kriging estimates values as accurately as deterministic
tricubic interpolation - shown to be very accurate in estimating intracell scalar
values in artificial volumes. I also showed that tricubic interpolation can perform
poorly in medical data sets and that kriging can be modified so these inaccuracies
do not occur.
The erroneous results produced by tricubic and several of the kriging variations
could be caused by invalid assumptions about the neighborhood influencing the es-
timation and the variation between sample values. Neither of these factors can be
modified in tricubic interpolation; however, they can be in kriging. I only modified
the neighborhood size and local drift assumptions. These modifications demonstrate
that kriging produces better results than tricubic interpolation. The data variation,
which is modelled by both the semivariogram and the local drift, needs to be deter-
mined by a structural analysis of the data. My goal was to demonstrate that kriging
is capable of being modified to behave like other deterministic interpolation tech-
niques and to prevent inaccuracies. Since I did not do a structural analysis and just
assumed the data variation, the results show that kriging is very robust. It is robust
because even though I guessed several of the kriging parameters, I was able to make
kriging behave like three other standard deterministic functions and better in some
cases. Following a structural analysis, kriging should provide the best estimation in
comparison to other known estimation methods.
The ability to modify kriging to behave like any other deterministic method
is important. First, it shows that kriging can be modified to behave like standard
estimation techniques, so if they are desired, kriging does no worse. That is, krig-
ing subsumes the deterministic methods I investigated. Also, kriging provides the
capability to dynamically change the estimation technique. This capability could be
used interactively to adaptively refine the estimation for different resolutions of data
and/or for different viewpoints of the rendition.
Kriging is considered the optimal estimator, and since accuracy is important in
3D medical imaging, exploring the use of kriging to estimate values is worthwhile. Its
use has mainly been to estimate values within environments such as mining, gas and
oil exploration, and other geoscience disciplines. Further research is critical to prove
the usefulness of kriging in 3D medical imaging. This research is also applicable to
any 3D imaging methods that perform estimation.
Appendix A. Vanilla Marching Cubes Data Flow Diagrams and
Program Description
(Figures: the "Top Level" data flow diagram and the "process slices" data flow
diagram for the vanilla marching cubes program.)
Primary Data Structures

* plane[4] - Pointers to four arrays holding original points and scalar values.
Need four in memory to calculate normals using gradient operator.
* ixPlane[2] - Pointers to two arrays holding preinterpolated 'x' points for cells
between two slices.
* iyPlane[2] - Pointers to two arrays holding preinterpolated 'y' points for cells
between two slices.
* izPlane - Pointer to an array holding the preinterpolated 'z' points for cells
between two slices.
* gPlane[2] - Pointers to two arrays holding calculated normals at cell vertices.
* gxPlane[2] - Pointers to two arrays holding preinterpolated 'x' normals for
cells between two slices.
* gyPlane[2] - Pointers to two arrays holding preinterpolated 'y' normals for
cells between two slices.
* gzPlane - Pointer to an array holding preinterpolated 'z' normals for cells
between two slices.
* 256 case table.
* Interpolated points translation table. Indexed by cell edge vertices.
* Interpolated normals translation table. Indexed by cell edge vertices.
* Isovalue array.
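The reason plane[4] holds four slices at once is that central-difference gradient normals for cells between slices k and k+1 also need slices k-1 and k+2; as each new slice is read, the four plane pointers are rotated rather than copied. A sketch of the rotation (illustrative only; read_slice is a hypothetical stand-in for the file-reading code):

```python
# Why four slices stay in memory: gradient normals for cells between
# slices k and k+1 need slices k-1 and k+2 as well. As each new slice is
# read, the four plane references are rotated rather than copied.
# read_slice is a hypothetical stand-in for the actual file-reading code.

def read_slice(index):
    return "slice%d" % index                    # placeholder slice data

plane = [read_slice(i) for i in range(4)]        # slices 0..3 in memory
for next_index in range(4, 7):
    plane = plane[1:] + [read_slice(next_index)]  # rotate, drop the oldest
print(plane)  # ['slice3', 'slice4', 'slice5', 'slice6']
```

The C implementation achieves the same effect by swapping pointers, which is why the translation lookup tables described below must be reestablished after every new slice is read.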
A.2.1
parse command line ("Top Level" DFD, Bubble 1)

while command line arguments remain {
    switch on argument {
    case: shell_flag {
        read file name argument;
        read isovalues from file; enter isovalues into an
        array of isovalues;
    }
    case: window_flag {
        read range of isovalues;
        set as first and second elements in array of isovalues;
    }
    case: box_flag {
        set box flag to true;
    }
    case: list_of_geom_files_flag {
        set list of geom files flag to true;
    }
    case: path_flag {
        read path argument;
        set path variable to argument, overriding default path;
    }
    case: data_file_flag {
        read from control file to get large data file information
        such as image dimensions, interslice amount, input data path, and
        number of slices.
    }
    /* else default is to read from an artificial volume data file */
    } /* switch */
} /* while */
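The case structure above can be sketched as a small argument parser. Only the -w option letter appears in the flag glossary at the end of this appendix; the other option letters below are hypothetical, and only a subset of the cases is shown.

```python
# Sketch of the command-line case structure above. Only -w is documented
# in the flag glossary; the other option letters here are hypothetical.

def parse_args(argv):
    config = {"isovalues": [], "box": False, "path": "."}
    args = list(argv)
    while args:
        flag = args.pop(0)
        if flag == "-w":                 # window flag: a range of isovalues
            config["isovalues"] = [float(args.pop(0)), float(args.pop(0))]
        elif flag == "-b":               # box flag (hypothetical letter)
            config["box"] = True
        elif flag == "-p":               # path flag (hypothetical letter)
            config["path"] = args.pop(0)
    return config

print(parse_args(["-w", "30.0", "40.0", "-b"])["isovalues"])  # [30.0, 40.0]
```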
A.2.4.1
read slices ("process slices" DFD, Bubble 4.1)

A.2.4.2
get delta distances ("process slices" DFD, Bubble 4.2)
/* done after reading in first three slices, before main for loop */
x_dist = abs(plane[0]->x - (plane[0] + 1)->x);
y_dist = abs(plane[0]->y - (plane[0] + x_dimension)->y);
A.2.4.3
calculate normals at cell vertices ("process slices" DFD, Bubble 4.3)

A.2.4.4
preinterpolate points & normals ("process slices" DFD, Bubble 4.4)

Initially calculate for cells between plane[0] and plane[1];
In main for loop calculate for cells between plane[1] and plane[2];
In final case, calculate for cells between plane[2] and plane[3];
Method: Use same comparison as in marching cubes for loop -
A.2.4.5
update translation lookup table ("process slices" DFD, Bubble 4.5)

Prior to marching cubes (cells) between two data slices, reestablish
pointers in translation lookup tables;
/* These pointers must be reestablished because the original data
slice pointers and the normal slice pointers are swapped after every
new slice is read in from the data file. The translation tables are
indexed by the cell edges and the entries contain pointers to the first
cell's edges. When the tables are used, an offset from the beginning of
the arrays is added to the pointers to access the correct cell.
For example, tlv[5][7] = izPlane + x_dimension + 1 establishes the
absolute cell address for the interpolated point on cell edge 5-7. */
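The pointer arithmetic in the example above can be sketched with array indices instead of C pointers (an illustrative reconstruction of the scheme, not the actual code): the table stores, per cell edge, a base array and a base offset valid for the first cell, and the current cell's own offset into the slice is added at lookup time.

```python
# Index-based sketch of the translation lookup table scheme. The thesis
# code uses C pointers; here the (5, 7) entry mirrors the example above
# (izPlane + x_dimension + 1), and the (0, 1) entry is made up.

x_dimension = 8

# Per-edge entries: (array name, base offset for the first cell).
tlv = {(5, 7): ("izPlane", x_dimension + 1),
       (0, 1): ("ixPlane", 0)}           # hypothetical second entry

def point_index(edge, cell_i, cell_j):
    """Absolute index of a cell's interpolated point for a given edge."""
    array_name, base = tlv[edge]
    return array_name, base + cell_j * x_dimension + cell_i

print(point_index((5, 7), 0, 0))  # ('izPlane', 9)
print(point_index((5, 7), 2, 1))  # ('izPlane', 19)
```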
A.2.5
march between slices ("process slices" DFD, Bubble 4.6)

A.2.5.1
classify cell vertices ("march" DFD, Bubble 4.6.1)

test each cell vertex to see if its scalar value is greater than the
isovalue(s);
If a vertex is so classified, boolean OR a flag with that vertex #;
Result of flag after all vertices are classified is the index into the
256 case table.
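The classification step above amounts to building an 8-bit index - one bit per cell vertex - which then indexes the 256-entry case table. A sketch (illustrative Python, not the thesis C code):

```python
# Classify the eight cell vertices against the isovalue and OR each
# "above" vertex's bit into an index for the 256-entry case table.

def cell_case_index(vertex_values, isovalue):
    index = 0
    for vertex_number, value in enumerate(vertex_values):
        if value > isovalue:
            index |= 1 << vertex_number
    return index

values = [10, 80, 20, 90, 15, 85, 25, 95]   # scalar values at vertices 0-7
print(cell_case_index(values, 50))  # vertices 1,3,5,7 above: 0b10101010 = 170
print(cell_case_index(values, 0), cell_case_index(values, 100))  # 255 0
```

An index of 0 or 255 means the surface does not intersect the cell; every other index selects one of the precomputed triangulation cases.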
A.2.5.2
determine unique case ("march" DFD, Bubble 4.6.2)

Access 256 case table with cell index, retrieve unique case #;

A.2.5.3
output stats ("march" DFD, Bubble 4.6.3)
A.2.5.4
get list of vertices from precalculated table ("march" DFD, Bubble 4.6.4)

Access 256 case table with cell index to retrieve list of vertices;
Set a temporary pointer to vertex list array;
A.2.5.5
get interpolated points & normals ("march" DFD, Bubble 4.6.5)

/* For each unique case, the cell edges for interpolation were identified
and appropriate triangulations were selected. For example, unique case
3 requires interpolations along edges 04, 02, 13, and 15.
Two possible triangulations can be chosen, but would not alter the image,
so the choice is arbitrary in this case. Now that the cell edges to
interpolate along are known, the translation lookup tables can be accessed
with the offset calculated above. */
Access translation lookup table to retrieve preinterpolated points and
normals along cell edges.
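The preinterpolation referenced in this step is the standard linear interpolation of a point along a cell edge to the isovalue; a sketch (illustrative only):

```python
# Linearly interpolate the position of a triangle vertex along a cell
# edge so that the isosurface crosses the edge at the isovalue.

def interpolate_edge(p1, v1, p2, v2, isovalue):
    t = (isovalue - v1) / (v2 - v1)       # fraction of the way along the edge
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))

# Edge from (0,0,0) with scalar 10 to (1,0,0) with scalar 30, isovalue 15:
print(interpolate_edge((0.0, 0.0, 0.0), 10.0, (1.0, 0.0, 0.0), 30.0, 15.0))
# (0.25, 0.0, 0.0)
```

The per-vertex normals retrieved from the translation tables are interpolated along the edge with the same fraction t.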
A.2.7
output geometry file ("Top Level" DFD, Bubble 6)

Catenate the two temporary files, with the appropriate header and
trailer information attached;
shell_flag - argument is a file name. File contains list of isovalues.
window_flag - argument is a range of isovalues, e.g., -w 30.0 40.0
box_flag - indicates a box will be drawn around image - only used for
artificial volumes such as embedded math functions.
list_of_geom_files_flag - specifies that multiple geom files will
be generated.
path_flag - specifies path for output geometry files. Default is the
directory path of the source code.
data_file_flag - specifies type of input file(s) - 3D SCALAR FIELD DATA.
Can be medical or artificial volumes. Artificial volume data is default
and requires redirecting stdin using <. For example
interpolated points & normals - points and normals interpolated along cell edges
from cell vertices to the isovalue(s) provided by the user.
points and normals - the original points obtained from the 3D
Appendix B. Vanilla Marching Cubes

This appendix describes the implementation of the vanilla marching cubes al-
gorithm developed by Lorensen and Cline in 1987 (34). It is termed vanilla because
my implementation does not include any enhancements such as texture mapping,
geometric solid modelling, or disambiguation. The first part of this appendix in-
troduces the model. Then, background decisions about my implementation are
discussed. Then, the main steps in the implementation are discussed. Following
this, use of the 256 element transformation table is explained. After this, the cell
edge interpolation method is presented. Then, the maintenance made to the public
domain marching cubes code is presented. Lastly, comments on the format of the
marching cubes output are discussed.
B.1 Model

The marching cubes algorithm (34) is a 3D imaging method that extracts a
surface of interest from a 3D volume of data. The surface is represented as a 3D
triangular mesh and rendered by a standard polygonal based graphics renderer. The
algorithm processes cubes or computational cells, where a cell is composed of eight
voxels, four each from two adjacent data slices. Each cell is analyzed to determine
if the surface of interest intersects the cell. Triangles are generated within cells
that are found to contain a portion of the surface. Surface detection within a cell
is accomplished by classifying each vertex as either one (a vertex whose scalar value
exceeds the isovalue) or zero (0 vertex). Vertex classification in this manner yields 256
possibly different cell classifications. Lorensen and Cline reduced this to 15 unique
cell cases. Triangle vertices are determined by linearly interpolating the voxel 3D
points to the isovalue.
B.2 Background

Before deciding on a thesis topic, I explored the use of kriging to somehow
improve surface extraction. I read Wilhelms and Gelder's (50) work on intracell
scalar value estimation. From there it was simple to see how kriging could be used.
But I still investigated other surface extractors to determine the history of marching
cubes and to see whether kriging could be applied in other areas. This led me into
the area of 3D medical imaging, where marching cubes as well as many other
methods are used.
As seen from chapter two, the two primary surface methods are cuberille based
and cell interpolation. The cuberille based approaches were developed by Herman,
Liu, and Udupa in the late 1970's and early 1980's. In contrast to the cuberille based
methods, the cell interpolation methods are more heuristically based. Two of the
cell interpolation algorithms are Lorensen and Cline's marching cubes (34) and (8)
and Wyvill and McPheeters' soft object algorithms (52). The latter two methods
are called cell interpolation techniques because they interpolate polygonal vertices
to the isosurface boundary along cell edges, where a cell is a parallelepiped with
eight adjacent voxels as vertices. Cells are also known as computational cells. The
important distinction between voxel based surface extractors such as cuberille based
models and cell-based surface extractors is the former assume a constant scalar value
throughout the volume element (the voxel), whereas the latter assume a varying
scalar value throughout the cell.
After reviewing the 3D imaging literature, I chose to continue exploring the
cell interpolation methods for the following reasons. First, I did not wish to develop
a specialized model, which would limit its use for future research to potentially only
one application. Cell interpolation methods have been widely used in both medi-
cal imaging and scientific visualization. Additionally, the graphical methods
used in the cuberille model are very complex to implement and often yield images
that are jagged in appearance (because of the 2D surface display of the cuberille sur-
face) without special shading procedures (1). In contrast, the cell interpola-
tion methods are much simpler to understand and implement and all yield very
high quality images without special shading methods used, beyond the traditional
ones. As for the critical part of 3D visualization, the ability to estimate varying scalar
values within cells and visualize direct results of the estimation process, the work of
Wilhelms and Gelder (50) provides a strong framework within which to visualize new
estimation methods. The work done here should have potential benefit to many other
3D imaging methods.
The main drawback of the cell interpolation approaches is the simplistic method
of segmenting the object of interest from the remainder of the volume. This method,
termed thresholding, makes a binary decision at each voxel - does the voxel con-
tribute to the final image or not? Volume methods do not make such a simple
decision; instead, each voxel can contribute characteristics (such as color and density)
to the final image (33) and (16). Again, the emphasis of this research is not to
compare volume and surface methods, but to
algorithm. This version only set up the precalculated table and performed the interpolation step, but did not do the more difficult task of computing normals from gradient information. It basically did steps 2-5 in the list presented in the next section, and was readable and modifiable. As stated in chapter 2, the marching cubes algorithm "marches" computational cells between two slices of data. See figure B.1 for a pictorial representation of the marching. The slices are labelled according to the order of arrays I
Lorensen and Cline (34) described the marching cubes algorithm in 1987. The algorithm uses polygons to represent the isosurface and to approximate normals for shading. The following is a list taken from the 1987 article, which denotes the steps performed in the algorithm. The remainder of this section discusses how some of the above steps were implemented. I do not discuss the steps that are straightforward from an understanding of the algorithm.
Figure B.1. Computational cell (cube) marching between data slices
First, I discuss the implementation of step 1 in the marching cubes algorithm.
Creating the precalculated table mentioned in step 4 is actually the first step that must be done. This table contains 256 entries, one for each of the possible cell vertex classifications. Each entry also has associated with it a list of vertices that
To create the table, I first analyzed all 256 possible cell vertex classification cases to determine the 15 unique cases and mapped the remainders to the unique cases (figures 2.6, 2.7, and 2.8 depict the unique cases). To simplify this process and obtain accuracy, I reduced human error as much as possible. I used tinker toys to represent a cell, with labels attached to the corners and marked to indicate vertex numbers. I also used a presentation graphics package to output 256 pictures of a numbered cell with a numbered segmented rectangle below to hold the binary value of the case (see figure B.2 (a)). I analyzed each case by marking the appropriate numbered labels for the case, complementing the vertices if necessary, and rotating the cell to correspond to the classification of a unique case. Each entry in the 256 element table contains the order of the cell vertices and the corresponding unique case. The order of vertices corresponds to the order of those specified in figure B.2
Figure B.2. Transforming and rotating case 94 (5E hex) to match a unique case
Suppose we have just marched to the next cell and case 94 (5E hex, 01011110 binary) is encountered. The original orientation is seen in figure B.2 (a). In a case such as this, Lorensen and Cline point out that "Complementary cases, where vertices greater than the surface value are interchanged with those less than the value, are equivalent" (34:165). Therefore, figure B.2 (a) is transformed to figure B.2 (b), which, when rotated, matches the same vertex classification as unique case number 25 (see figure B.2 (c)). The major table entry for case 94 contains 25 for the unique case index and also contains the cell vertex ordering 5 1 4 0 7 3 6 2. Another table, called the translation lookup table, is used to map case 25 into triangles based upon previously computed interpolation points along the cell edges, and uses the ordering in the table in place of the normal ordering 0 1 2 3 4 5 6 7 (which is only used in unique cases).
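The classification and complementing just described can be sketched as follows. This is an illustrative sketch rather than the thesis code; the function names and the vertex-to-bit assignment are my own assumptions.

```c
/* Illustrative sketch (not the thesis code): form the 8-bit case index
 * from a cell's eight vertex values, and complement a case by flipping
 * all eight bits, per the complementary-case rule quoted above. */
static int cell_case_index(const float v[8], float isovalue)
{
    int index = 0;
    for (int i = 0; i < 8; i++)
        if (v[i] > isovalue)        /* 1-vertex: inside the surface */
            index |= 1 << i;
    return index;                   /* 0..255, one of the 256 table entries */
}

static int complement_case(int index)
{
    return (~index) & 0xFF;         /* interchange 1-vertices and 0-vertices */
}
```

Complementing case 94 (5E hex) this way yields case 161 (A1 hex); a rotation then maps that classification onto a unique case, as described above.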
Cell edge interpolation is the process that determines where the isosurface intersects a cell edge. The cell edge must have one voxel value greater than the isosurface value and the other less than the isosurface value for cell edge interpolation to occur.
One area where marching cubes implementations can differ is when cell edge interpolation occurs (step 5). My code interpolates in a premarching step. That is, every time a new scan plane (slice) of data is read in, it is immediately processed to find the interpolation points along cell edges where the surface is estimated to cross. This requires a total of seven arrays, each with the dimensions of a slice: two for the original slice data, four for the x and y interpolation values for each of the two planes forming the cells, and one array for the z interpolation values. Two x and y interpolation arrays correspond to each of the two slices of data that cells will "march" between. Only one array is needed for interpolation points in the z dimension, because of the geometry of the slices and the orientation chosen. This can be seen in figure B.3, which depicts the correspondence between interpolated points along cell edges and these arrays. The x_ij represent points interpolated in the x direction, where i corresponds to the ith position in the array and j is the slice number, either 1 or 2 (thus 2 x interpolation arrays). Notice the points only lie along the cell edges in the x direction. The y_ij points represent the same for the y interpolation arrays. The z_i points represent interpolated points in the z direction. If the isosurface does not intersect a particular edge, the corresponding entry in the
The following is the interpolation formula used to find the intersection point along a cell edge. This formula applies only along a single component (x, y, or z), since a cell's edge lies in only one of the three coordinate directions.
Figure B.3. Correspondence of interpolation arrays to computational cells marching between two slices
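The interpolation formula itself survives only in part in this copy; the standard marching cubes form, with which the surrounding text is consistent, is sketched below. This is an assumed, illustrative version, not a transcription of the thesis formula, and the function name is my own.

```c
/* Illustrative sketch of the standard marching cubes edge interpolation
 * (assumed form). Given the two voxel values v1, v2 at the edge endpoint
 * coordinates p1, p2 (a single coordinate component, since a cell edge
 * lies along one axis), return the coordinate where the isosurface is
 * estimated to cross the edge. The caller ensures v1 != v2. */
static double edge_interpolate(double p1, double p2,
                               double v1, double v2, double isovalue)
{
    double t = (isovalue - v1) / (v2 - v1);   /* fraction along the edge */
    return p1 + t * (p2 - p1);
}
```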
The translation lookup table is later used during marching to access interpolation points in these arrays. The advantage of this pre-interpolation method is that shared triangle vertices are guaranteed to be the same because any one edge is only ever processed once, which reduces computation time; whereas, during marching, each internal edge is processed twice. However, the disadvantage is the memory required to maintain arrays that contain the interpolated points. Even more overhead is required for this method to implement steps 1 and 6, which calculate the triangle vertex normals.
B.6 Normal Calculations
My code produces the normals during the pre-interpolation step. The following central difference gradient operator is used to estimate the outward direction of the surface at a particular voxel (i, j, k) along the three coordinate axes (34:165):
Gx(i, j, k) = (D(i+1, j, k) - D(i-1, j, k)) / Δx
Gy(i, j, k) = (D(i, j+1, k) - D(i, j-1, k)) / Δy
Gz(i, j, k) = (D(i, j, k+1) - D(i, j, k-1)) / Δz
D(i, j, k) is the density value at voxel (i, j, k), and Δx, Δy, Δz are the lengths of the cell edges in the corresponding component. Once the cell normals are estimated, they are interpolated along cell edges to the isovalue using the same interpolation formula used to find the triangle vertices. In the vanilla marching cubes algorithm I handle boundary cases as special cases when determining the vertex normals. That is, voxels along the border of the enclosing rectangular volume are assigned the outward facing normal along the enclosing volume (see figure B.4).
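The gradient operator above can be sketched as applied to a density array. The array layout, struct, and names here are illustrative assumptions; the division by the cell edge length follows the formula given above.

```c
/* Illustrative sketch of the central difference gradient operator used
 * to estimate surface normals at voxel (i,j,k). D is assumed stored with
 * x varying fastest; interior voxels only (border cases are special). */
typedef struct { double x, y, z; } Vec3;

static double D_at(const double *D, int nx, int ny, int i, int j, int k)
{
    return D[(k * ny + j) * nx + i];
}

static Vec3 central_gradient(const double *D, int nx, int ny,
                             int i, int j, int k,
                             double dx, double dy, double dz)
{
    Vec3 g;
    /* difference of the two neighbors, divided by the cell edge length */
    g.x = (D_at(D, nx, ny, i + 1, j, k) - D_at(D, nx, ny, i - 1, j, k)) / dx;
    g.y = (D_at(D, nx, ny, i, j + 1, k) - D_at(D, nx, ny, i, j - 1, k)) / dy;
    g.z = (D_at(D, nx, ny, i, j, k + 1) - D_at(D, nx, ny, i, j, k - 1)) / dz;
    return g;
}
```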
The original code I started with was written to be fast (though it was practically unusable because it did not produce normals nor planar polygons). One problem the code had was that it did not maintain a standard logical order of the original slices when reading them in from the data files. Only the interpolated planes were swapped to maintain order. Planes in the code are the same as arrays corresponding to slices. Planes are swapped to reuse the previously read slice for the next processing step.
Figure B.4. Outward facing normals assigned to boundary cases along the enclosing volume
all planes after each slice is processed. I cut the size of the translation table in half, but it has to be reinitialized after each slice is processed. However, the code is much
The triangles are output in the form of an AFIT geometry file, which expects all the points to be listed first, followed by a list of polygons whose vertices are indicated by referencing the line numbers of the above mentioned points. The AFIT geometry file is then used as input to a slightly modified version of AFIT's GPR.
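The points-first, polygons-second layout just described can be sketched roughly as below. The exact AFIT geometry file syntax is not reproduced in this appendix, so the record layout in this sketch is an assumption, as are the names.

```c
#include <stdio.h>

/* Rough sketch of writing a points-then-polygons geometry file of the
 * kind described above: all points first, then triangles whose vertices
 * reference the points by their (1-based) line numbers. */
static void write_geometry(FILE *f,
                           const double (*pts)[3], int npts,
                           const int (*tris)[3], int ntris)
{
    for (int i = 0; i < npts; i++)                  /* all points first */
        fprintf(f, "%g %g %g\n", pts[i][0], pts[i][1], pts[i][2]);
    for (int t = 0; t < ntris; t++)                 /* polygons reference */
        fprintf(f, "3 %d %d %d\n",                  /* point line numbers */
                tris[t][0] + 1, tris[t][1] + 1, tris[t][2] + 1);
}
```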
The code for the vanilla marching cubes implementation was written originally in C, and consequently it was functionally oriented. It was later converted to C++
Appendix C. Cell Subdivision Implementation
This appendix describes the cell subdivision implementation in more detail than presented in chapter four of the thesis. First discussed is some terminology to help understand the rest of the appendix. After that, the purpose of cell subdivision is presented. Then the cell subdivision implementation steps are described.
C.1 Terminology
Before proceeding, I present some terminology that eases the following explanations. I consider the initial computational cells in the main loop as major cells (see figure C.1). The newly created cells within a major cell I term minor cells. Major cells have the original voxels as vertices, so I consider minor cell vertices as minor voxels. Finally, I consider the arrays within a subdivided major cell as minislices.
The primary purposes of cell subdivision are to disambiguate ambiguous cell cases and to derive a better approximation of the isosurface. An ambiguous cell occurs whenever more than one topology can be chosen for the cell. Cell subdivision disambiguates an ambiguous cell by subdividing it into minor cells. Even though the subdivided ambiguous major cell is no longer ambiguous (because it is no longer dealt with), cell subdivision does not guarantee that minor cells will not be ambiguous. Wilhelms and Gelder (50) handle ambiguous minor cells by choosing one topology. Cell subdivision can also be used to increase image fidelity. This is done by fitting the surface within minor cells versus the much larger major cells; that is, data resolution is increased by forming minor cells. Increasing the data resolution to improve image fidelity is the same idea behind the dividing cubes algorithm (8), which outputs point primitives.
Subdividing can also be used to increase resolution in only one dimension, equivalent to creating cubic voxels in the cuberille data model, by creating new logical slices between the original ones. (The appendix entitled Disambiguation and Enhanced Surface Representation by Cell Subdivision presents an example that helps clarify why cell subdivision can improve image quality.) This method is used in the
This section lists the cell subdivision steps I implemented and describes each step:

1. Calculate gradients at major cell vertices.
2. "March" major cells between two data slices.
3. Subdivide each major cell into minor cells.
4. Estimate scalar values at the minor cell vertices.
5. Apply marching cubes surface extraction within major cells to form the surface.
Step 1 is basically the same step as in the vanilla marching cubes (vmc) implementation: calculating the gradient. The central difference gradient operator uses the voxel values surrounding a major cell for the calculation. This step differs from the vmc implementation in the handling of border cases. The vmc code deals with border cases (such as the first and last data slices and the edges of data slices) by approximating the surface normal at these major cell vertices with a method other than the central difference gradient operator. This is because the 64 surrounding voxel values do not exist in these cases. Instead of the gradient operator in these cases, the normals are approximated by the vector normal to the surrounding parallelepiped.
Since two of the estimation techniques (tricubic interpolation and kriging) used in the cell subdivision process need 64 surrounding values at all times, I handled the border cases differently in the cell subdivision implementation. First, I always assume there are 64 surrounding voxel values. To do this, I ignore the data on the edges of the 2D data slices used as input. This is not a problem because in most medical image data slices several rows and columns of edge values do not contribute to the meaningful portion of the data. Because of this assumption, I insure the artificial volumes I create are centered within the volume, with at least one array position in the x and y dimensions as a buffer zone. Since I cannot ignore the first and last data slices, I create two dummy slices to replace them. The values for these dummy slices are copied from the slices they are imitating.
Step 2 is the same marching that occurs in the vmc implementation. Computational cells are marched between two data slices (see figure B.1). In cell subdivision, these cells are not polygonized, but are subdivided. This subdivision is described next.

Step 3 - Cell subdivision

In this step, major cells are subdivided into minor cells. I use my vmc implementation to "march" within each subdivided major cell. Since I use a vmc implementation, data slices are assumed to be read into memory; therefore, minislice arrays simulate reading data slices into memory, and I never use more than four data slices at a time.
In the vmc implementation, data slices already have values and points associated with them when read into arrays in memory. However, both values and points must be calculated for minislice arrays. The points are calculated from subdivision factors, specified prior to execution. Three subdivision factors are set, one along each of the three major axes (e.g., fx = 3, fy = 5, fz = 2 means subdivide the cell into three parts in the x direction, into five parts in the y direction, etc.). Figure C.1 depicts a major cell subdivided into eight minor cells where the subdivision factor is two in each dimension: one minor voxel point at the midpoint of each major cell edge, one minor voxel point in the center of each major cell face, and one minor voxel point in the very center of the major cell. Figure C.2 depicts a major cell subdivided into 5 parts in all three directions. The calculation divides each major cell edge into equal parts.
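The edge calculation can be sketched as follows, assuming uniform spacing of minor voxel points along a major cell edge. This is an illustrative sketch, not the thesis code, and the names are my own.

```c
/* Illustrative sketch: compute the minor voxel coordinates that divide
 * one major cell edge into f equal parts (uniform spacing assumed).
 * A subdivision factor f yields f+1 minor voxel points along the edge,
 * including the two original major cell vertices. out must hold f+1. */
static void subdivide_edge(double p0, double p1, int f, double *out)
{
    for (int i = 0; i <= f; i++)
        out[i] = p0 + (double)i * (p1 - p0) / (double)f;
}
```

With f = 5, for example, this produces the six points dividing an edge into five parts, matching the subdivision factor used in the artificial volume experiments.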
Once a minor voxel point is determined, a scalar value is assigned to the point.
Step 4 - Estimate scalar values
Scalar values must be estimated at the minor cell vertices. I implemented three functions to estimate values at minor cell vertices: trilinear interpolation, tricubic interpolation, and kriging. The theory is presented in chapter three and the implementation details are in chapter four. Once the minor
Two vmc implementations like the one described in the previous appendix accomplish cell subdivision. The first vmc implementation is used in the outermost loop to read the actual data slices into memory. There are two primary loops in the system. Within this outermost loop, major cells are formed. The only tasks the first vmc implementation does are to read data into memory and form major cells. The surface is actually formed by the second vmc implementation, which
The second vmc implementation treats major cells as subvolumes, extracting subsurfaces from them. Thus the second primary loop "marches" within major cells, forming portions of the isosurface. That is, for each major cell within a volume of 3D data, the second vmc implementation extracts a subsurface from each subvolume (major cell). The major task here is to insure there is a continuous surface extracted not only between minor cells but also between subvolumes. A continuous surface means the triangle vertices and normals are the same at shared locations. Fortunately, the vmc implementation described in the previous appendix insures surface continuity.
What remains is how minor cell vertex normals are approximated on the boundary of major cells. Recall from the vmc algorithm that surface normals are approximated at cell vertices; this is done at all minor cell vertices completely contained within a major cell. However, the minor cell vertices on the borders of major cells are handled specially to insure inter-subvolume surface continuity. First, those minor cell vertices that are the same as the major cell vertices are assigned the same normal value as the major cell vertices.
This is possible because pre-marching interpolation of both points and normals is performed in the outermost loop. Then, prior to marching within a major cell, I calculate cell face normal averages and cell edge normal averages to use on the other boundary cases. These normal averages, shared between subvolumes, insure inter-subvolume surface continuity.

After the points, values, and normals are estimated for the minor cell vertices, I then interpolate points and normals in a premarching step (the same as in the vmc implementation described previously) to determine the surface-minor cell intersections. Next, I "march" minor cells between minislices and output triangle vertices and normals to an Air Force Institute of Technology (AFIT) geometry file. To render the surface, I call a modified version of the AFIT General Purpose Renderer (GPR) to do Phong illumination and Phong shading. Another appendix describes the changes made to GPR.
Appendix D. Disambiguation and Enhanced Surface Representation by Cell Subdivision
This appendix describes how cell subdivision can disambiguate ambiguous cells and how it can enhance the surface representation. First, some background information is presented; then an example is presented that helps demonstrate the purpose of this appendix.
D.1 Background
I, as well as Wilhelms and Gelder (50), demonstrate that subdividing cells and estimating intra-cell scalar values can cause a smoother representation of the surface generated by cell interpolation. Using a subdivision factor of 5 in each dimension, I explore both trilinear and tricubic estimation functions in artificial volumes. The trilinear function generates a better surface fit than the vanilla marching cubes, but is still far from the desired surface. Tricubic estimation performs even better in these artificial volumes. Tricubic estimation causes the surface extraction to generate a closer representation of the actual surface than the trilinear does. The authors cited above claim the tricubic is better at estimating points within the cell because it uses a larger neighborhood of points without assuming linearity. However, this assumption may not be valid for data with sharp contrasts within a small neighborhood of voxels.
A larger neighborhood may in fact cause errors in data with sharp contrasts. Kriging estimation can also use a larger neighborhood of control points. The promising nature of kriging is that it guarantees the best linear estimator, and the neighborhood size
Subdividing ambiguous cells does not guarantee that ambiguous cells will be removed. The minor cells created from the subdivision process may be ambiguous. Wilhelms and Gelder (50) apply the facial averaging technique, described in chapter two, to disambiguate minor cells. That is, if a face is ambiguous, the average value obtained by averaging the four face vertices is tested against the isovalue. If the average is greater than the isovalue, the 1-vertices are connected; else the 0-vertices are connected.
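The facial averaging test just described can be sketched as follows; an illustrative sketch with assumed names, not the thesis code.

```c
/* Illustrative sketch of facial averaging disambiguation: for an
 * ambiguous face, compare the average of the four face vertex values
 * against the isovalue. Returns 1 if the 1-vertices should be
 * connected, 0 if the 0-vertices should be connected. */
static int connect_one_vertices(const double face[4], double isovalue)
{
    double avg = (face[0] + face[1] + face[2] + face[3]) / 4.0;
    return avg > isovalue;
}
```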
In all the artificial data sets I create, ambiguity is completely removed without the need for disambiguating minor cells. However, this does not occur in the medical data sets I tested. In the artificial data sets, the surface generated by cell interpolation appears smoother the higher the subdivision factors, depending on the estimation function. The next section explores how a smoother surface is generated.
D.2 Example
The polygonization in the marching cubes algorithm, or any cell interpolation method, is not the only one possible for a given cell. For example, another triangulation of unique case 6 of figure 2.7 is depicted in figure D.1.

Figure D.1. Alternate polygonization for case 6
Note that only the ambiguous cases of the unique case figures in chapter two are truly ambiguous in the sense that alternate polygonizations can be performed, even though it is remotely possible further subdivision and estimation could alter the chosen topology.
To understand how cell subdivision and estimation can disambiguate ambiguous cells, consider a major ambiguous cell in which a possible erroneous topology is generated (a major cell refers to an unsubdivided cell, a minor cell to a subdivided cell). Topology here refers to the polygonization of a cell that represents a portion of the surface.
Without subdivision, erroneous topology is a gross error, which not only causes inaccuracies in the final image, but can also create "holes" as discussed in chapter two. Subdividing this cell will reduce this one large erroneous topology into smaller cells, most of which will be nonambiguous, depending on how well the estimation function estimates the surface in the cell. Case 14 (see figure D.2) is a particularly good example of a rare case, even if no ambiguity results. The reason why it is rare is because it represents a very complicated portion of the surface topology. The chance of case 14 appearing within any of the minor cells is even rarer. This example assumes case 14 of figure D.2 A depicts the surface correctly within that cell. Then subdividing the cell could possibly generate the subdivided major cell depicted in figure D.2 B. In this case the topology remained the same, and case 14 does not show up. In fact, all nonempty minor cells are unique nonambiguous case 1 in this figure. In any subdivision, the 1-vertices of the major cell will remain 1-vertices in the corresponding minor cells because their values do not change. However, new 1-vertices may be added in the minor cells. In this figure, the 1-vertices are the same. For simplicity, I will discuss only one edge of the major cell where the surface intersects.
Figure D.2. Example of alternate surface representations caused by cell subdivision

Figure D.2 C depicts possibly different minor cell topologies caused by the estimation function. In figure D.2 B, the surface intersects edge E between minor voxels a and b. Figure D.2 C, however, shows more complicated minor cell vertex classifications, where the surface intersects between minor voxels b and c. In figure D.2 C, the upper right front cell is now unique case 3 instead of 1, and a new nonempty minor cell exists: the lower right front cell. This cell is another unique case 1. This does not imply that figure D.2 A is a wrong topology; it just means that in this case, figure D.2 C captures the surface intersection point on edge E more accurately.
Of course there are many other possible topologies within the minor cells, too numerous to list here. The point of the simple example presented is to show that subdivision can disambiguate and cause the cell interpolation to provide a closer approximation of the actual surface. If minor cell vertex value estimation is accurate, the new surface intersection points should be closer to the true surface boundary, thus generating a triangular mesh that better approximates the actual surface of interest. Subdividing the cell even further should generate even closer surface intersection points. Again, resolution of ambiguous minor cells is not guaranteed, but as stated previously, the few ambiguous minor cells that may result can be dealt with by facial averaging.
Appendix E. Binary Image Format to Utah RLE Format Conversion
It is often very useful to look at just a single slice of data, especially if the data is from CT or MRI scanning technologies. However, most of the data is in binary, so it must be converted to an image format. I chose the Utah RLE format because I can view an RLE image from any of the different types of workstations we have at the Institute. My code assumes binary files as input, but can be easily modified to
The majority of image files are stored as 1 byte per value, with values stored in scanline order from bottom to top. Therefore, the double for loops used to read
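A sketch of this bottom-to-top, one-byte-per-value reading pattern follows; the dimensions, names, and error handling are illustrative assumptions, not the conversion code itself.

```c
#include <stdio.h>

/* Illustrative sketch: read a width-by-height slice stored one byte per
 * value in scanline order from bottom to top, flipping it into a
 * top-to-bottom array in memory. Returns 0 on success, -1 on a short
 * read. */
static int read_slice(FILE *f, unsigned char *img, int width, int height)
{
    for (int row = height - 1; row >= 0; row--)      /* file is bottom-up */
        for (int col = 0; col < width; col++)
            if (fread(&img[row * width + col], 1, 1, f) != 1)
                return -1;
    return 0;
}
```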
Some differences in file formats were discovered during this effort; the Chapel Hill data sets are one example. To determine the image format, the file command can be tried (on a Sun console). The Utah RLE toolkit has routines that do the same task (graytorle and rawtorle). For example, I modified the code to compute the histogram of the data. The major benefit of the code is that it has the basic format for reading binary data, which is
Appendix F. Changes to the Air Force Institute of Technology General Purpose Renderer
several hidden surface removal, reflection model, and shading model implementations. This plethora of options consequently makes the executable quite large. Since memory is a scarce resource when dealing with volumes, I decided to minimize GPR's memory usage. I did this by removing unnecessary portions such as code handling texture mapping, vertex colored polygons, Bezier patches, scan line z-buffering, etc. I retained the Z buffer and 'A' buffer hidden surface removal implementations, all
Although this reduced the executable size by over a half, GPR still required too much memory for a single geometry file output from my marching cubes implementation. I attempted to fix this problem by creating a list of geometry files, since GPR is capable of reading multiple files. The code is supposed to free memory correctly.
GPR allocates many arrays. It is possible the GNU C++ array deallocation does not work properly. When array deallocation is attempted, according to Schildt (38:337), the g++ warning message "array size expression for delete ignored" results (e.g. delete [pcount] pointvects). Schildt indicates the importance of this operation (ignored by g++):

    One reason that you need to specify the number of elements in the array to the delete operator is so that the proper number of destructor functions can be called (that is, one for each object in the array).
I wrote test cases using the same data structures used by GPR. I discovered that the array size had no effect on deallocation. The main factor appears to be the order and the sizes of the memory blocks allocated. I found that if a smaller block of memory is allocated first and then freed, a larger block cannot use the just-freed space because it is not large enough (both malloc()/free() and new/delete appeared to operate the same). Therefore, for the best use of Unix memory management, large blocks should be allocated first.
Because GPR allocates many blocks of varying size, based on the polygon count, the vertex count, and the number of vertices per polygon, and to keep from altering GPR significantly to order memory allocations, I chose to implement a very simple scheme. A geometry file output from marching cubes is between three and five megabytes per file. GPR makes one block of memory large enough to handle the largest file in the list of files output from marching cubes. This size varies based on the isovalue(s) selected and the input data resolution; however, I found the maximum needed never exceeds 10 megabytes.
The memory block is allocated once at the beginning of GPR's main routine. Since memptr is type cast as a void *, portions of this block can be cast to any type. An offset into this memory block is maintained for assigning new memory. Before each geometry file is processed, this offset is reset to zero. Thus, no delete or free operations are necessary, and the same memory is reused over and over.
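The single-block scheme just described can be sketched as a simple offset ("bump") allocator. The names and the alignment choice here are illustrative assumptions, not the GPR code.

```c
#include <stddef.h>

/* Illustrative sketch of the one-block scheme: a single large block is
 * allocated once, and an offset is advanced for each request. Resetting
 * the offset between geometry files reuses the whole block, so no
 * free/delete calls are ever needed. */
static char  *mem_block;     /* allocated once at startup */
static size_t mem_size;
static size_t mem_offset;

static void *mem_alloc(size_t n)
{
    n = (n + 7u) & ~(size_t)7;          /* keep 8-byte alignment */
    if (mem_offset + n > mem_size)
        return NULL;                    /* block exhausted */
    void *p = mem_block + mem_offset;
    mem_offset += n;
    return p;
}

static void mem_reset(void)             /* before each geometry file */
{
    mem_offset = 0;
}
```

Note that because allocation only moves the offset forward, the fragmentation problem observed with malloc()/free() and new/delete cannot arise.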
Appendix G. Creating Artificial Volumes
This appendix discusses the methods that create an artificial volume. An artificial volume (term taken from Tiede (43)) in this work is one in which scalar values are artificially entered at node points in a 3D array to represent some object surface, versus a volume containing voxel values which are generated by a scanning device. The volumes I created for this work contain surfaces depicting three-dimensional mathematical functions. In each method I implemented, the surface is always centered within the first octant (positive x, y, z) by subtracting the center point (h, k, l) from the points (x, y, z) in the math equation.
The initial method I developed is very straightforward, but does not allow setting a surface threshold other than 0. I accomplished this by assigning to each voxel the value returned by evaluating the math function at the voxel 3D point. For example, a voxel value at mesh point (x, y, z) for a sphere is determined by:

    Value(x, y, z) = (x - h)^2 + (y - k)^2 + (z - l)^2 - r^2
Functions defining a surface return positive values on one side and negative values on the other, where surface points evaluate to zero. Since marching cubes interpolates triangle vertices to the scalar value between voxels, this method generates a volume in the correct format for marching cubes to read. This is so because a surface defined by a math function will rarely, if ever, intersect an artificial volume voxel exactly. Of course, the larger the volume, the better the chance this will occur.
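The initial method can be sketched as follows for the sphere example; the array layout and names are illustrative assumptions.

```c
/* Illustrative sketch of the initial method: assign each voxel the
 * value of the implicit sphere function evaluated at the voxel's mesh
 * point, with center (h,k,l) and radius r. Values are negative inside
 * the sphere, zero on it, and positive outside, so an isovalue of 0
 * extracts the sphere. x varies fastest in the array. */
static void fill_sphere_volume(double *vol, int nx, int ny, int nz,
                               double h, double k, double l, double r)
{
    for (int z = 0; z < nz; z++)
        for (int y = 0; y < ny; y++)
            for (int x = 0; x < nx; x++) {
                double dx = x - h, dy = y - k, dz = z - l;
                vol[(z * ny + y) * nx + x] = dx * dx + dy * dy + dz * dz - r * r;
            }
}
```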
Lt Col Phil Amburn extended the method just discussed to allow any scalar value to represent the surface. Since the value returned from a math function evaluated at a mesh point denotes the distance from the surface in a positive or negative direction, this distance is used to taper off a value from the scalar value chosen. Also, to get the output in marching cubes format, I modified the algorithm to taper off towards the negative if the point is on the negative side of the surface and towards the positive if the mesh point is on the positive side. The formula for this method is:

    d = fabs(value(x, y, z))
    voxel(x, y, z) = ISOVALUE - d, if value(x, y, z) < 0
    voxel(x, y, z) = ISOVALUE + d, if value(x, y, z) > 0
I used AFIT's viewit program on the Silicon Graphics 3100 series workstations to view the artificial volumes. I also included a command line option to draw a box
Appendix H. Trilinear Interpolation
This method assumes scalar values vary linearly along component directions between voxels (45:114). It interpolates in each of the three dimensions; see figure H.1, where f(oi) and f(ni) represent the scalar value at original point i and new (or intermediate) point i, respectively. The goal is to estimate f(n6) given f(o0) through f(o7).
The intermediate values are successively determined. That is, f(n0) = f(o0) + t (f(o1) - f(o0)), where t is the fractional distance of the new point along the edge.
Then f(n1) is found by interpolating between f(o2) and f(o3) in the same manner. f(n5) is found by interpolating between the intermediate values f(n1) and f(n3). The final value f(n6) is determined by interpolating between f(n4) and f(n5).
There is one significant problem with this method that reduces accuracy of estimates in a three-dimensional data set. Only the eight surrounding voxels are analyzed to estimate a new value at a particular point within the cell, when in fact these eight samples may not provide sufficient information to infer the variability of the data. More importantly, however, the data may not vary linearly along major cell edges.
A problem of less significance is the direction in which to interpolate initially, because f(n4) and f(n5) are based on the first set of intermediate interpolations in this assumed direction (e.g., the y direction could be assumed initially, then the other directions follow). This is a potential cause of inaccuracy because, for example, values may not vary the same
f(02)
II2
Bibliography
15. Delfiner, P. and J. P. Delhomme. "Optimum Interpolation by Kriging," Display and Analysis of Spatial Data, edited by John C. Davis and Michael J. McCullagh, 96-114. New York: John Wiley & Sons, 1975.
16. Drebin, Robert A., et al. "Volume Rendering," Computer Graphics, 22(4):65-74 (August 1988).
17. Dubrule, Olivier. "Comparing Splines and Kriging," Computers & Geosciences, 10(2-3):327-338 (1984).
18. Durst, Martin J. "Letters: Additional Reference to Marching Cubes," Computer Graphics, 22(2):72-73 (April 1988).
19. Farrell, Edward J. and Rosario A. Zappulla. "Three-Dimensional Data Visualization and Biomedical Applications," CRC Critical Reviews in Biomedical Engineering, 16(4):323-363 (1989).
20. Foley, James D., Andries van Dam, Steven K. Feiner and John F. Hughes. Computer Graphics: Principles and Practice (Second Edition). Addison-Wesley Publishing Company, 1990.
21. Fox, John. Linear Statistical Models and Related Methods with Applications to Social Research. John Wiley & Sons, 1984.
22. Frieder, Gideon, et al. "Back-to-Front Display of Voxel-Based Objects," IEEE Computer Graphics and Applications, 5(1):52-60 (January 1985).
23. Fuchs, Henry, et al. "Optimal Surface Reconstruction from Planar Contours," Communications of the ACM, 20(10):693-702 (October 1977).
24. Ganapathy, S. and T. G. Dennehy. "A New General Triangulation Method for Planar Contours," Computer Graphics, 16(3):69-75 (July 1982).
25. Grant, Michael. The Application of Kriging in the Statistical Analysis of Anthropometric Data, Volume I. MS thesis, AFIT/GOR/ENY/ENS/90M-8, School of Engineering, Air Force Institute of Technology (AU), Wright-Patterson AFB OH, May 1990 (AD-A220 613).
26. Herman, Gabor T., et al. "Computer Techniques for the Representation of Three-dimensional Data on a Two-dimensional Display," SPIE, 367:3-11 (1982).
27. Herman, Gabor T. "A Survey of 3D Medical Imaging Technologies," IEEE Engineering in Medicine and Biology, 9(4):15-17 (December 1990).
28. Herman, Gabor T. and Jayaram K. Udupa. "Display of 3-D Digital Images: Computational Foundations and Medical Applications," IEEE Computer Graphics and Applications, 39-46 (August 1983).
29. Journel, Andre G. Fundamentals of Geostatistics in Five Lessons. Washington, D.C.: American Geophysical Union, 1989.
30. Keppel, E. "Approximating Complex Surfaces by Triangulation of Contour Lines," IBM Journal of Research and Development, 19(1):2-11 (January 1975).
31. Kerbs, Lynda. "GEO-Statistics: The Variogram," COGS Computer Contributions, 12(2):54-59 (August 1986).
32. Laur, David and Pat Hanrahan. "Hierarchical Splatting: A Progressive Refinement Algorithm for Volume Rendering," Computer Graphics, 25(4):285-288 (July 1991).
33. Levoy, Marc. "Display of Surfaces from Volume Data," IEEE Computer Graphics and Applications, 29-37 (May 1988).
34. Lorensen, William E. and Harvey E. Cline. "Marching Cubes: A High Resolution 3D Surface Construction Algorithm," Computer Graphics, 21(4):163-169 (July 1987).
35. Matheron, G. "Principles of Geostatistics," Economic Geology, 58:1246-1266 (1963).
36. McCormick, Bruce H., et al. "Visualization in Scientific Computing," Computer Graphics, 21(6) (November 1987).
37. Mendenhall, William, et al. Mathematical Statistics with Applications. PWS-KENT Publishing Company, 1990.
38. Schildt, Herbert. C++: The Complete Reference. McGraw-Hill, 1991.
39. Stytz, M. R. and O. Frieder. "Three-Dimensional Medical Imaging Modalities: An Overview," Critical Reviews in Biomedical Engineering, 18(1):27-54 (1990).
40. Stytz, Martin R. and Ophir Frieder. "Computer Systems for Three-Dimensional Diagnostic Imaging: An Examination of the State of the Art," Critical Reviews in Biomedical Engineering, 19(1):1-45 (1991).
41. Stytz, Martin Robert. Three-Dimensional Medical Image Analysis Using Local Dynamic Algorithm Selection on a Multiple-Instruction, Multiple-Data Architecture. PhD dissertation, Computer Science and Engineering, University of Michigan, 1989.
42. Sunguroff, Alexander and Donald Greenberg. "Computer Generated Images for Medical Applications," Computer Graphics, 12(3):196-202 (1978).
43. Tiede, Ulf, et al. "Investigation of Medical 3D-Rendering Algorithms," IEEE Computer Graphics and Applications, 41-53 (March 1990).
44. Udupa, Jayaram K. "Interactive Segmentation and Boundary Surface Formation for 3-D Digital Images," Computer Graphics and Image Processing, 18(3):213-235 (March 1982).
45. Udupa, Jayaram K. and Gabor T. Herman, editors. 3D Imaging in Medicine. Boston: CRC Press, 1991.
46. Upson, Craig and Michael Keeler. "V-BUFFER: Visible Volume Rendering," Computer Graphics, 22(4):59-64 (August 1988).
47. Upson, Craig, et al. "The Application Visualization System: A Computational Environment for Scientific Visualization," IEEE Computer Graphics and Applications, 30-42 (July 1989).
48. Watson, G. S. "Smoothing and Interpolation by Kriging and with Splines," Mathematical Geology, 16(6):601-615 (1984).
49. Watt, Alan. Fundamentals of Three-Dimensional Computer Graphics. Addison-Wesley Publishing Company, 1989.
50. Wilhelms, Jane and Allen Van Gelder. Topological Considerations in Isosurface Generation. Technical Report, University of California, Santa Cruz CA, April 1990.
51. Wilhelms, Jane and Allen Van Gelder. "A Coherent Projection Approach for Direct Volume Rendering," Computer Graphics, 25(4):275-284 (July 1991).
52. Wyvill, G., C. McPheeters and B. Wyvill. "Data Structures for Soft Objects," The Visual Computer, 2(4):227-234 (August 1986).