Abstract
This research is the first application of fractal compression to volumetric data. The various components of the fractal image compression method extend simply and directly to the volumetric case. However, the additional dimension increases the already high time complexity of the fractal technique, requiring the application of sophisticated volumetric block classification and search schemes to operate at a feasible rate. Numerous experiments over the many parameters of fractal volume compression show it to perform competitively against other volume compression methods, surpassing vector quantization and approaching the discrete cosine transform.
1 Introduction
The problem of managing extremely large data sets often arises in applications employing volumetric data. This has prompted research in new techniques for economizing both storage space and processing time. Data compression techniques reduce the size of volumetric data, converting the array into a representation which is stored more efficiently.
Fractal techniques based on iterated function systems [11, 2] have been successfully applied to the compression of one-dimensional signals [3, 31] and two-dimensional images [13, 8], by finding a fractal representation that models the original signal as closely as possible, and storing the model instead of the original data. Fractal volume compression uses analogous techniques for the encoding of three-dimensional volumetric data.
This research in fractal volume compression is part of our recurrent modeling project, which focuses on the abilities of the recurrent iterated function system [3] as a geometric representation. Fractal volume compression provides a first step toward the use of linear fractals to model arbitrary 3-D shapes.
Section 2 reviews the fractal image compression technique and summarizes four items of previous work relating to fractal volume compression. Section 3 extends the components of the fractal image compression method to volumetric data. Section 4 discusses optimization of the search used by fractal compression methods, allowing fractal volume compression to perform its search in a feasible amount of time. Section 5 briefly describes decompression algorithms. Section 6 lists the results of various experiments and compares them with previous results. Section 7 concludes and offers directions for further research.
2 Previous Research
This section begins by reviewing the essentials of the fractal image compression method, mentioning elements that directly contribute to the fractal volume compression method.
Compression researchers have only recently applied their methods to the task of compressing volumetric data. There appear to be only two previous extensions of sophisticated image compression techniques (vector quantization and the discrete cosine transform) to volumetric data. In addition, there is previous work in fractal compression of 3-D image data, but only in the form of animations and multi-view imagery. The rest of this section summarizes these previous techniques, whereas their results appear in Section 6.7 for better comparison to fractal volume compression.
to the volumetric case due to its increased time and space complexity. A block classification scheme [26] trims the search space by a constant factor [13], segregating searches within simple edge, mixed edge, midrange or shade block classes. The most promising method reduces the time complexity of the search from linear to logarithmic by adapting existing sophisticated nearest-neighbor techniques [27].
blocks first expands the data by a factor of five with shading information, then compresses the result by a factor of only five due to the space-filling arrangement of blocks, resulting in no compression, just fast low-quality rendering.
the variance of their brightness. Range blocks were also subdivided when necessary. After entropy encoding the parameters describing the fractal transformations, a 17-view color multi-view image of a toy dog was coded at a bit rate of 0.1095 bits per pixel with a PSNR of 37.52 dB.
Fractal coding is appreciated for its resolution independence. Since fractal transformations simply describe a contraction from one region of the dataset to another, they can be used at any resolution. Artificial data was interpolated from the compressed representation by decoding at a higher resolution than the original dataset. This feature supported the approximation of a 51-view 3-D image with a 17-view image, which effectively improved the bit rate from 0.1095 to 0.0365 bits per pixel [22].
independently.
The functional notation V(x, y, z) ∈ ℝ denotes the voxel located in the input volume at the grid point (x, y, z) ∈ ℤ³. Each voxel outside the volume's region of support is defined to be zero (i.e., a volume with dimensions W × H × D implies V(x, y, z) ≡ 0 if (x, y, z) ∉ [0, W − 1] × [0, H − 1] × [0, D − 1]).
Each voxel is normally quantized to b bits by mapping it to the integer range 0 … 2^b − 1, thus requiring W · H · D · b bits to store the entire volume directly.
Volumetric data typically arises from the spatial measurement of real-world data, but also from simulated sources. Medical applications use computed-tomography (CT) or magnetic-resonance-imaging (MRI) scans. The study of aerodynamics depends on the results of wind tunnel data and computational fluid dynamics simulations. Computer graphics has found situations in which a volumetric representation performs better than a surface description [16, 15]. Volume compression makes the management of such massive amounts of volumetric data feasible for general use in existing facilities.
The distortion metric
\[ L^2(X, Y) = \sum_{x=0}^{w-1} \sum_{y=0}^{h-1} \sum_{z=0}^{d-1} \left( X(x, y, z) - Y(x, y, z) \right)^2 \qquad (2) \]
measures the similarity or "distance" between two w × h × d blocks of voxels X and Y. The notation L²(X, Y) will be used to represent this value.
which defines all the range blocks in this partitioning set. The voxels that comprise any range block can be enumerated by the function
\[ R(x, y, z) = V(x_r + x,\; y_r + y,\; z_r + z), \quad (x, y, z) \in [0, w - 1] \times [0, h - 1] \times [0, d - 1] \qquad (4) \]
which conveniently allows us to reference voxels within a range block without regard to their global positions within the volume.
Adaptive partitioning subdivides range blocks that fail to encode satisfactorily. Larger range blocks yield a higher compression rate, though typically at a lower fidelity. The encoder first attempts to find maps for large range blocks. Each range block is subdivided into eight children. If the coding of the range block results in a distortion above a specified threshold t_mse for any child, then that child block is coded separately (and possibly subdivided itself). If seven or eight children require separate coding, then a short stub code replaces the range block's code, and all eight children are encoded separately. The overhead associated with tracking this hierarchical coding requires that each (non-stub) parent code contain child configuration information.
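The octree-style adaptive partitioning just described can be sketched as a recursion. This is an illustrative skeleton rather than the paper's implementation: `code_block` stands in for whatever per-block fractal coder is used (returning a code and its per-voxel mse), all names are ours, block dimensions are assumed to be powers of two, and the stub-code bookkeeping for mostly-subdivided parents is omitted.

```python
import numpy as np

def encode_adaptive(block, code_block, t_mse, min_size=2):
    """Octree-style adaptive range-block coding (illustrative sketch).

    code_block(block) -> (code, mse) attempts a code for the block.
    A block whose per-voxel mse exceeds t_mse is split into eight
    children, each coded recursively, down to a minimum edge size."""
    code, mse = code_block(block)
    w, h, d = block.shape
    if mse <= t_mse or min(w, h, d) <= min_size:
        return ('leaf', code)
    hw, hh, hd = w // 2, h // 2, d // 2
    children = [encode_adaptive(block[x:x + hw, y:y + hh, z:z + hd],
                                code_block, t_mse, min_size)
                for x in (0, hw) for y in (0, hh) for z in (0, hd)]
    return ('split', children)
```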
re-sampled with the same spatial resolution as a w × h × d range block:
\[ C_n D(x, y, z) = \frac{1}{n^3} \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} \sum_{k=0}^{n-1} D(nx + i,\; ny + j,\; nz + k). \qquad (7) \]
This effectively applies an n³ averaging filter to the domain block followed by a simple subsampling operation. Here we have forced the dimensions of the domain block to be integral multiples of the range block dimensions, but the following equivalent decimation operator C′_n can be used to contract w′ × h′ × d′ domain blocks without this restriction:
\[ C'_n D(x, y, z) = \frac{1}{n^3} \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} \sum_{k=0}^{n-1} D\!\left( \left\lfloor \frac{(x + \tfrac{i}{n})\, w'}{w} \right\rfloor, \left\lfloor \frac{(y + \tfrac{j}{n})\, h'}{h} \right\rfloor, \left\lfloor \frac{(z + \tfrac{k}{n})\, d'}{d} \right\rfloor \right). \]
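The decimation operator C_n of Equation (7) amounts to block averaging, which NumPy expresses with a reshape; a sketch under the stated integral-multiple assumption (the function name is ours):

```python
import numpy as np

def contract(D, n):
    """Decimation operator C_n of Equation (7): average each n x n x n
    neighborhood of the domain block D, shrinking it by a factor of n
    along each axis.  Dimensions of D must be multiples of n."""
    W, H, Dd = D.shape
    return D.reshape(W // n, n, H // n, n, Dd // n, n).mean(axis=(1, 3, 5))
```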
The transformation pool 𝒯 is the set of all possible transformations of the form
\[ T = I_k \circ G_{\alpha,\beta} \circ O \circ C_n \qquad (12) \]
which can be parameterized by an isometry index 0 ≤ k < 8, a "contrast scale" α and a "luminance shift" β. For effective quantization the value of α is chosen from a finite set {a_0, …, a_{n−1}} (determined a priori) and β is mapped to the nearest integer. We will use the notation T(D) to denote the net effect of the transformation T ∈ 𝒯 on the domain block D ∈ 𝒟.
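The net effect of a transformation T on a domain block can be sketched by composing the contraction, an isometry and the gray-value map. This sketch is ours: for brevity it substitutes the eight axis-mirror symmetries for the paper's full isometry tables and omits the orthogonalization operator O.

```python
import numpy as np

def apply_transform(D, n, k, alpha, beta):
    """Sketch of T(D): contract D by n, apply an isometry indexed by k
    (here the eight axis-mirror symmetries, a stand-in for the paper's
    isometry tables), then scale brightness by alpha and shift by beta."""
    W, H, Dp = D.shape
    B = D.reshape(W // n, n, H // n, n, Dp // n, n).mean(axis=(1, 3, 5))
    if k & 1:
        B = B[::-1, :, :]   # mirror x
    if k & 2:
        B = B[:, ::-1, :]   # mirror y
    if k & 4:
        B = B[:, :, ::-1]   # mirror z
    return alpha * B + beta
```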
4 Searching
For each range block R ∈ ℛ, a transformation T ∈ 𝒯 and domain block D ∈ 𝒟 are sought that yield a sufficiently small distortion measure L²(R, T(D)). This search dominates the fractal compression process, and its extension to volumetric data causes this search time
    ( y,  x, −z)     Rotation of π about
    ( z, −y,  x)     the six edge–edge axes.
    (−x,  z,  y)
    (−y, −x, −z)
    (−z, −y, −x)
    (−x, −z, −y)

    ( z,  x,  y)     Rotation of 2π/3 about
    (−y,  z, −x)     the four vertex–vertex axes.
    (−z,  x, −y)
    ( y, −z, −x)
    ( y,  z,  x)
    ( z, −x, −y)
    (−y, −z,  x)
    (−z, −x,  y)

Table 2: Fourteen more rigid body cubic rotations shown in terms of their result on the input vector (x, y, z).
    ( x,  y,  z)     Identity

    (−x, −y,  z)     Rotations
    (−x,  y, −z)
    ( x, −y, −z)

    (−x,  y,  z)     Reflections
    ( x, −y,  z)
    ( x,  y, −z)
    (−x, −y, −z)

Table 3: Eight non-cubic transformations.
to greatly lengthen.
Designing a fast encoder hinges on the efficiency of the search for matching transformed domain blocks. This problem is approached by introducing heuristics to guide the encoder's search towards regions of the dataset that share common characteristics and are therefore more likely to provide self-similar mappings. A classification scheme [26] provides such guiding [13]. This scheme used an edge-oriented classification system designed to alleviate both the time complexity and edge degradation problems inherent to block image coders. This system does not easily extend to the realm of 3-D block classification as a consequence of the complexity added by the extra dimension. One previous volumetric classification scheme [22] thresholded the sample variance of voxels within a block. This has the undesirable effect of avoiding rapidly convergent transformations that map high contrast regions to low contrast range blocks.
Even with classification, this search for self-affine maps still remains sequential. Instead of segregating blocks into a set of predefined classes, associating a real-valued key with each block replaces the linear global block search with a logarithmic multi-dimensional nearest-neighbor search [27]. This solution is readily generalized to 3-D block encoding and allows for much larger search spaces, which improve coding fidelity but also require larger domain indices.
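The key idea behind the nearest-neighbor acceleration can be sketched as follows: blocks are mapped to normalized feature vectors so that blocks equivalent under the gray-value map land at nearby keys. This simplified variant of Saupe's scheme is ours; the linear scan shown would be replaced by a kd-tree [9, 27] to make each query logarithmic.

```python
import numpy as np

def block_key(B):
    """Saupe-style search key (simplified): normalize a block to zero mean
    and unit norm, so any two blocks related by the gray-value map
    alpha*B + beta share (up to the sign of alpha) the same key."""
    v = np.asarray(B, dtype=np.float64).ravel()
    v = v - v.mean()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def nearest_domain(range_block, domain_blocks):
    """Index of the domain block whose key is nearest the range block's key.
    A linear scan for clarity; in practice the keys are stored in a
    kd-tree so each query costs O(log N)."""
    keys = np.array([block_key(D) for D in domain_blocks])
    q = block_key(range_block)
    return int(np.argmin(np.sum((keys - q) ** 2, axis=1)))
```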
pool. The algorithm in Table 4 outlines this "brute force" solution. Every range block in the partitioning set ℛ is examined, for which the entire domain pool 𝒟 and transformation pool 𝒯 are scanned for the optimal transformation.
One can prune the exhaustive search of 𝒟 × 𝒯 by solving for the optimal gray-value transformation coefficients. If we represent range blocks and contracted domain blocks with the m × 1 (m = w · h · d) column vectors [r_0 … r_{m−1}]^T and [d_0 … d_{m−1}]^T, respectively, then the optimal choice for α and β provides the least squares fit to an over-determined system of the form Ax = b:
\[ \begin{bmatrix} d_0 & 1 \\ d_1 & 1 \\ \vdots & \vdots \\ d_{m-1} & 1 \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} r_0 \\ r_1 \\ \vdots \\ r_{m-1} \end{bmatrix}. \qquad (13) \]
If we consider uniform domain blocks (i.e., d_0 = d_1 = ⋯ = d_{m−1}) non-admissible, then the columns of A are linearly independent, the matrix A^T A is invertible, and the unique least-squares solution x = [α β]^T is
Let d̄ = E(d_i) = (1/m) Σ_{i=0}^{m−1} d_i be the first moment or mean of d_i (similarly for r̄ = E(r_i)), let σ_d² = E(d_i²) − (E(d_i))² be the second central moment or variance of d_i, and let σ_rd = E(r_i d_i) − E(r_i) E(d_i) be the second central moment or covariance of r_i and d_i; then
\[ x = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} \sigma_{rd} / \sigma_d^2 \\ \bar{r} - \alpha \bar{d} \end{bmatrix}. \qquad (15) \]
The orthogonalization transformation O yields d̄ = 0. The offset value β simply becomes the DC component r̄ of the range block. The value a_i closest to α from the set {a_0, …, a_{n−1}} is selected as our contrast scaling coefficient. β is simply mapped to the nearest integer. It is well known from transform coding that much of the energy in an image represented in the frequency domain resides in its DC term. Therefore, it is critical that the decoder can faithfully reproduce β from whatever quantization scheme is used.
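Equation (15) reduces the per-pair least-squares fit to two sample moments; a direct NumPy sketch (the function name is ours):

```python
import numpy as np

def gray_coefficients(R, Dc):
    """Least-squares contrast scale alpha and luminance shift beta of
    Equation (15): alpha = cov(r, d) / var(d), beta = mean(r) - alpha*mean(d),
    for a range block R and an equally-sized contracted domain block Dc."""
    r = np.asarray(R, dtype=np.float64).ravel()
    d = np.asarray(Dc, dtype=np.float64).ravel()
    var_d = d.var()
    if var_d == 0.0:              # uniform domain blocks are non-admissible
        return 0.0, float(r.mean())
    alpha = ((r * d).mean() - r.mean() * d.mean()) / var_d
    beta = r.mean() - alpha * d.mean()
    return float(alpha), float(beta)
```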
each range block in the next volume. At each step of the iteration, the domain pool is refined. As this process continues, it converges to the decompressed version.
In implementation, the process actually needs only one volume, partitioned into both range and domain blocks, which is decompressed onto itself at each step of the iteration. Each range block is overwritten with the transformed contents of the appropriate domain block. If this overwrites a section of the volume later accessed as a domain block, then some or all of that domain block already holds values from the current iteration, as it would in a later iteration. Some care is necessary for hierarchical (e.g., octree) representations to make sure that the larger, more lossy "parent" range blocks do not overwrite previously decoded "child" blocks from a previous iteration.
Faster techniques exist for decompressing fractal-coded images [20], and these could be easily extended to the volumetric case. However, such techniques are typically designed for animation playback, and volume visualization systems are not yet fast enough to render dynamic volume datasets in real time.
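The iterative decompression described above can be sketched as follows. This toy version is ours: it iterates Jacobi-style on a copy rather than in place, fixes a 2:1 domain-to-range contraction, and omits isometries; a code here is just (range_corner, domain_corner, size, alpha, beta).

```python
import numpy as np

def decompress(codes, shape, iters=6):
    """Iterative decompression sketch: starting from an arbitrary volume,
    repeatedly overwrite each range block with the transformed contents
    of its domain block until the attractor is approximated."""
    V = np.zeros(shape)
    for _ in range(iters):
        W = V.copy()                      # read from the previous iterate
        for (rx, ry, rz), (dx, dy, dz), s, a, b in codes:
            D = W[dx:dx + 2 * s, dy:dy + 2 * s, dz:dz + 2 * s]
            C = D.reshape(s, 2, s, 2, s, 2).mean(axis=(1, 3, 5))
            V[rx:rx + s, ry:ry + s, rz:rz + s] = a * C + b
    return V
```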
The fractal volume compression algorithm can be adjusted to permit direct rendering of
compressed data, avoiding a complete decompression step. Separating a volumetric dataset
into a set of possibly overlapping \macro-blocks" supports on-demand decompression during
rendering [32]. Such macro-blocks may be incorporated into fractal volume compression by
selecting domain blocks within the macro-block containing the range block. In this fashion,
each macro-block of the compressed volume may be decompressed independently.
6 Results
The fractal volume compression algorithm was tested on a variety of popular, publicly available datasets, and its performance was measured using several metrics, to promote better comparison with other existing and future volume compression methods.
6.1 Measurement
Several quantitative methods exist for indicating the fidelity of a compression algorithm. In the following, V is the original volume with dimensions W × H × D, and Ṽ is the decompressed volume. The function ε(x) represents the error between V(x) and Ṽ(x):
\[ \mathrm{mse} = \frac{1}{W H D} \sum_{\mathbf{x}} \varepsilon^2(\mathbf{x}). \qquad (19) \]
Another way to express the difference or "noise" between V and Ṽ is the signal-to-noise ratio (SNR). Several versions of SNR exist, which can cause confusion when comparing results. The signal-to-noise ratio used in [24], denoted SNR_f, measures the ratio of the signal variance to the error variance:
\[ \mathrm{SNR}_f = 10 \log_{10} \frac{\operatorname{var} \tilde{V}(\mathbf{x})}{\operatorname{var} \varepsilon(\mathbf{x})} = 10 \log_{10} \frac{E(\tilde{V}^2(\mathbf{x})) - E^2(\tilde{V}(\mathbf{x}))}{E(\varepsilon^2(\mathbf{x})) - E^2(\varepsilon(\mathbf{x}))}. \qquad (20) \]
If ε(x) and V(x) each have a mean of zero then SNR_f is equivalent to the mean squared signal-to-noise ratio
\[ \mathrm{SNR}_{ms} = 10 \log_{10} \frac{\sum_{\mathbf{x}} \tilde{V}^2(\mathbf{x})}{L^2(V, \tilde{V})}. \qquad (21) \]
Even for datasets that are not zero-mean (as is the case here) this is often used to measure the quality of the reconstructed signal.
The peak-to-peak signal-to-noise ratio, PSNR, is defined analogously, with the squared peak-to-peak signal range in place of the signal energy.
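These metrics translate directly to NumPy; a sketch (names ours). Since the paper's PSNR equation falls outside this excerpt, the usual peak²/mse form is assumed here.

```python
import numpy as np

def fidelity_metrics(V, Vt, peak):
    """Fidelity metrics of Section 6.1: per-voxel mse (Equation 19),
    variance-based SNR_f (20), mean-squared SNR_ms (21), and PSNR
    against the given peak signal value (assumed peak^2 / mse form)."""
    V = np.asarray(V, dtype=np.float64)
    Vt = np.asarray(Vt, dtype=np.float64)
    err = V - Vt
    l2 = np.sum(err ** 2)
    mse = l2 / V.size
    snr_f = 10.0 * np.log10(Vt.var() / err.var())
    snr_ms = 10.0 * np.log10(np.sum(Vt ** 2) / l2)
    psnr = 10.0 * np.log10(peak ** 2 / mse)
    return mse, snr_f, snr_ms, psnr
```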
6.2 CT Medical Data
A 256² × 113 12-bit CT head dataset (containing voxel values ranging from −1117 to 2248) was used to test the fractal volume compression algorithm over several different range block sizes w × h × d and octree subdivision down to an edge size no less than two voxels. An mse distortion threshold value of t_mse (the maximum allowed average mse per voxel) determined which child blocks were recursively encoded. The domain pool blocks were spaced Δx = 4, Δy = 4, and Δz = 2 apart, yielding approximately 230,000 domain block records for each octree partition level. Encoding times were measured on a dual-processor HP-9000/J200 with 256 MB of RAM, and do not include file I/O time.
Table 5: Encoding results for a 12-bit 256² × 113 CT head using an mse octree partitioning threshold of 4608.
Table 6: SNR for the decoded 12-bit CT head using 1 to 6 iterations of the stored transformations.
This same dataset was reduced to 8-bit voxels by mapping each original voxel v to the range {0, …, 255} by round(255(v + 1117)/3365). The 8-bit volume was compressed using various mse child threshold values t_mse to determine when octree subdivision would occur.
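The 12-bit-to-8-bit requantization above can be written directly (the function name is ours):

```python
import numpy as np

def requantize(v, lo=-1117, hi=2248):
    """Map 12-bit CT voxel values in [lo, hi] to 8-bit {0, ..., 255}
    using the paper's rule round(255 * (v + 1117) / 3365)."""
    return np.rint(255.0 * (np.asarray(v) - lo) / (hi - lo)).astype(np.uint8)
```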
Whereas images are generally compressed in their rendered state, volume data requires further processing, such as classification, shading and rendering [18, 7], before being displayed as an image. Hence errors introduced through compression yield different artifacts in the volumetric case than in the image case. An isovalue surface rendering, such as Figure 2, puts volume compression techniques to an extreme test. Moreover, faces are typically used to analyze compression fidelity since humans have developed visual skills specifically for interpreting subtle changes in facial expression.
Table 7: Encoding results for an 8-bit 256² × 113 CT head using varying mse octree partitioning thresholds.
Figure 1: Slice #56 of the CT head volume dataset: Original (upper left), 18:1 4³-voxel range blocks (upper right), 22:1 6 × 4²-voxel range blocks (lower left), 22:1 8³-voxel range blocks (lower right).
Figure 2: Isovalue surface rendering of the skin (isovalue = 50) of the CT head volume dataset: Original (upper left), 18:1 4³-voxel range blocks (upper right), 22:1 6 × 4²-voxel range blocks (lower left), 22:1 8³-voxel range blocks (lower right).
For example, a simple run-length encoding of the CT head dataset reduces it to 68% of its original size, whereas the RLE method failed to compress the MRI head dataset.
Table 8: Coding results for an 8-bit 256² × 109 MRI of a human head with varying octree partitioning threshold values t_mse. Range block size varied dynamically from 16³ to 2³. A domain spacing of Δ_{x,y,z} = 4 was used along each axis.
Figure 3 demonstrates the fidelity of the fractal volume compression on a slice of the MRI dataset. Even at the extremely high 729:1 rate, the skin and bone edges are reproduced, but the textured detail is obscured.
Figure 3: Slice #54 of the MRI head volume dataset: Original (upper left), 20:1 (upper
center), 25:1 (upper right), 30:1 (lower left), 43:1 (lower center), 729:1 (lower right).
block size   comp.     SNR_f (dB)   PSNR (dB)
2³           8.59:1    21.31        40.60
2² × 3       11.09:1   20.24        39.53
3² × 2       15.35:1   18.99        38.30
4³           12.14:1   20.98        40.26
4² × 6       16.29:1   19.08        38.39
8³           13.59:1   20.72        39.99
16³          13.91:1   20.47        39.76
32³          13.96:1   20.00        39.30

Table 9: Comparison of compression rate and fidelity versus range block sizes and hierarchy depth of the MRI head dataset, using an mse octree partitioning threshold t_mse = 18 and domain spacing Δ_{x,y,z} = 4 along each axis.
Table 10: Coding statistics for an 8-bit 256² × 109 MRI of a human head at various levels in the octree using a partitioning threshold value t_mse = 36 and domain spacing Δ_{x,y,z} = 4 along each axis.
partition      total         local search    admissible global
range size     range codes   codes found     domain blocks
16³            1,792         1,443           47,703
8³             744           96              42,992
4³             879           356             36,445
2³             274           273             29,218

Total codes transmitted = 3,689.

Table 11: Coding statistics for an 8-bit 256² × 109 MRI of a human head at various levels in the octree using a partitioning threshold value t_mse = 900 and domain spacing Δ_{x,y,z} = 4 along each axis.
Table 12: Results from compressing an 8-bit 256² × 110 CT scan of an engine block. 8³-voxel range blocks (octree partitioned down to 2³ voxels) were used with a partitioning threshold of t_mse = 18. The domain spacing Δ_{x,y,z} = 4 was used along all three axes.
Figure 4: Original (left) and compressed (right) renderings of the engine block dataset.
6.6 Dicing: Integration of Macro-Blocks
Fractal volume compression supports the rendering of a volume directly from the compressed version by first dicing the volume into macro-blocks [32] (groups of blocks), and then compressing each macro-block independently. Each macro-block may then be decompressed independently. Hence large datasets may be rendered on workstations lacking sufficient memory to contain the entire data, and only visible sections of the volume need be decompressed.
The 8-bit CT head was also diced into 32³-voxel macro-blocks, which were compressed individually, to measure any loss of fidelity due to the limited domain pools. Since the macro-blocks were significantly smaller than the entire volume, a tighter domain spacing was allowed, and resulted in an impressive compressed volume fidelity PSNR of 41.28 dB (SNR_ms of 27.50 dB). The compression rate for such a volume remains under investigation, but will surely beat the 11:1 ratio in Table 7.
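Dicing into macro-blocks is a simple partition of the index space; a sketch (names ours), assuming volume dimensions divisible by the macro-block size:

```python
import numpy as np

def dice(V, m=32):
    """Dice a volume into m^3-voxel macro-blocks keyed by their corner
    coordinates, so each can be compressed -- and later decompressed --
    independently of the others."""
    W, H, D = V.shape
    return {(x, y, z): V[x:x + m, y:y + m, z:z + m]
            for x in range(0, W, m)
            for y in range(0, H, m)
            for z in range(0, D, m)}
```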
7.1 Applications
Contrary to its name, fractal image compression performs better on sharp edges and worse in textured areas. Hence, fractal volume compression performs better on clearly delineated regions, such as bone and skin, but worse on tissue or finely detailed areas. The DCT volume compression method blurs both edges and fine detail [32]. Hence, fractal volume compression appears well suited for the compression of classified, filtered datasets, and also of synthesized data sets.
Fractal compression techniques tend to perform better at high compression rates compared to other methods. For medical volume compression applications, any data loss due to compression is likely unacceptable. Hence, volume compression in general may find its most suitable application in medicine for the high-rate low-fidelity indexing, previewing and presentation of volume datasets, for which fractal compression techniques are the best choice.
7.2 Future Research
Fast rendering algorithms based on the fractal representation appear to be within reach. As [23] preceded [24], further research on fractal volume compression will likely produce a fast rendering algorithm.
Fractal compression research is much newer than that of other compression techniques such as the DCT or VQ, and the technique is neither fully understood nor fully developed. As new enhancements appear for fractal image compression, they will likely improve fractal volume compression as well.
The Bath fractal transform [19] operates without a domain search, and could be easily extended to 3-D and applied to volumetric data. This technique compensates for the loss of block diversity by augmenting the affine block transformation with linear, quadratic and even cubic functions.
Compression of volumes sampled on an irregular grid would require a more sophisticated
partitioning and block-transformation scheme.
7.3 Acknowledgments
Several volume visualization systems were used during the development of fractal volume
compression [1, 17, 29]. Data for the CT head was obtained from the University of North
Carolina-Chapel Hill volume rendering test data set.
This research is part of the recurrent modeling project, which is supported in part by a
gift from Intel Corp. The research was performed using the facilities of the Imaging Research
Laboratory, which is supported in part under grants #CDA-9121675 and #CDA-9422044.
The second author is supported in part by the NSF Research Initiation Award #CCR-
9309210. The third author is supported in part by the NSF under grants #IRI-9209212 and
#IRI-9506414.
References
[1] Ricardo S. Avila, Lisa M. Sobierajski, and Arie E. Kaufman. Towards a comprehensive volume visualization system. In Proc. of IEEE Visualization '92, pages 13-20, Oct. 1992.
[2] Michael F. Barnsley and Stephen G. Demko. Iterated function systems and the global construction of fractals. Proceedings of the Royal Society A, 399:243-275, 1985.
[3] Michael F. Barnsley, John H. Elton, and D. P. Hardin. Recurrent iterated function systems. Constructive Approximation, 5:3-31, 1989.
[4] Kai Uwe Barthel, Thomas Voye, and Peter Noll. Improved fractal image coding. In Proc. of Picture Coding Symposium, March 1993.
[5] J. M. Beaumont. Image data compression using fractal techniques. BT Technology Journal, 9(4), October 1991.
[6] Wayne O. Cochran, John C. Hart, and Patrick J. Flynn. Principal component analysis for fractal volume compression. In Proc. of Western Computer Graphics Symposium, pages 9-18, March 1994.
[7] Robert A. Drebin, Loren Carpenter, and Pat Hanrahan. Volume rendering. Computer Graphics, 22(4):65-74, Aug. 1988.
[8] Yuval Fisher. Fractal image compression. In Przemyslaw Prusinkiewicz, editor, Fractals: From Folk Art to Hyperreality. SIGGRAPH '92 Course #12 Notes, 1992. To appear: Data Compression, R. Storer (ed.), Kluwer.
[9] Jerome H. Friedman, Jon Louis Bentley, and Raphael Ari Finkel. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 3(3), September 1977.
[10] Alan Gersho and Robert M. Gray. Vector Quantization and Signal Compression. Kluwer, Boston, 1992.
[11] J. Hutchinson. Fractals and self-similarity. Indiana University Mathematics Journal, 30(5):713-747, 1981.
[12] Geir Egil Øien. L2-Optimal Attractor Image Coding with Fast Decoder Convergence. PhD thesis, Norwegian Institute of Technology, 1993.
[13] Arnaud E. Jacquin. Image coding based on a fractal theory of iterated contractive image transformations. IEEE Transactions on Image Processing, 1(1):18-30, Jan. 1992.
[14] I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
[15] James T. Kajiya and Timothy L. Kay. Rendering fur with three dimensional textures. Computer Graphics, 23(3):271-280, July 1989.
[16] Arie Kaufman. Efficient algorithms for 3D scan conversion of parametric curves, surfaces, and volumes. Computer Graphics, 21(4):171-179, July 1987.
[17] Philippe Lacroute and Marc Levoy. Fast volume rendering using a shear-warp factorization of the viewing transformation. In Computer Graphics, Annual Conference Series, pages 451-458, July 1994. Proc. of SIGGRAPH '94.
[18] Marc Levoy. Display of surfaces from volume data. IEEE Computer Graphics and Applications, 8(3):29-37, 1988.
[19] Donald M. Monro and Frank Dudbridge. Fractal approximation of image blocks. In Proc. of ICASSP, volume 3, pages 485-488, 1992.
[20] Donald M. Monro and Frank Dudbridge. Rendering algorithms for deterministic fractals. IEEE Computer Graphics and Applications, 15(1):32-41, Jan. 1995.
[21] Donald F. Morrison. Multivariate Statistical Methods. McGraw-Hill Book Company, New York, 1976.
[22] Takeshi Naemura and Hiroshi Harashima. Fractal encoding of a multi-view 3-D image. In Bob Werner, editor, Proceedings ICIP-94, volume 1. IEEE Computer Society Press, November 1994.
[23] Paul Ning and Lambertus Hesselink. Vector quantization for volume rendering. In Proc. of 1992 Workshop on Volume Visualization, pages 69-74. ACM Press, 1992.
[24] Paul Ning and Lambertus Hesselink. Fast volume rendering of compressed data. In Gregory M. Nielson and Dan Bergeron, editors, Proc. of Visualization '93, pages 11-18. IEEE Computer Society Press, 1993.
[25] William B. Pennebaker and Joan L. Mitchell. JPEG Still Image Data Compression Standard. Van Nostrand Reinhold, New York, 1993.
[26] B. Ramamurthi and A. Gersho. Classified vector quantization of images. IEEE Transactions on Communications, 34, Nov. 1986.
[27] Dietmar Saupe. Accelerated fractal image compression by multi-dimensional nearest neighbor search. In J. A. Storer and M. Cohn, editors, Proceedings DCC'95 Data Compression Conference. IEEE Computer Society Press, March 1995.
[28] Dietmar Saupe and Raouf Hamzaoui. A guided tour of the fractal image compression literature. In John C. Hart, editor, SIGGRAPH '94 Course #13 Notes: New Directions for Fractal Models in Computer Graphics, pages 5-1 to 5-21. ACM SIGGRAPH, 1994.
[29] Barton T. Stander and John C. Hart. A Lipschitz method for accelerated volume rendering. In Proc. of Volume Visualization Symposium '94, Oct. 1994. To appear.
[30] Pravin M. Vaidya. An O(n log n) algorithm for the all-nearest-neighbors problem. Discrete & Computational Geometry, 4:101-115, 1989.
[31] Greg Vines. Signal Modeling with Iterated Function Systems. PhD thesis, Georgia Institute of Technology, May 1993.
[32] Boon-Lock Yeo and Bede Liu. Volume rendering of DCT-based compressed 3D scalar data. IEEE Transactions on Visualization and Computer Graphics, 1(1), March 1995.