
A Distortion Camera for Ray Tracing

P. Acquisto and E. Gröller, Institute for Computer Graphics, Technical University Vienna, Karlsplatz 13/186/2, A-1040 Vienna, Austria

ABSTRACT

Ray tracing is a powerful technique for realistic image generation. Typically a simple camera definition is used, whereby a 3D environment is mapped onto an image plane by either an orthographic or a perspective projection. This paper extends the usual simple camera definition in several ways to achieve distorted views or projections of the object scene. The origins of primary rays are no longer required to lie on a plane, so projections onto curved image surfaces become possible. The directions of primary rays may be chosen according to various nonlinear functions that allow nonstandard projections of the environment. The concept of centers of interest (coins) is introduced, which enables the user to concentrate on especially interesting portions of object space. A center of interest is a 3D position that either distorts (attracts) a portion of the image surface or locally influences the directions of primary rays. The results of a test implementation are presented to show the feasibility of the methods. Applications of distorted images include elemental holographic images, raster Omnimax images, virtual reality and the arts.

INTRODUCTION

Ray tracing is a powerful technique in computer graphics for realistic image generation. Typically a simple camera definition is used to map the objects of a 3D environment onto a 2D image plane. The pinhole-camera model is a very simple approximation to the way the human eye perceives the 3D world (Figure 1).

Figure 1: pinhole-camera model and perspective projection

The perspective projection (Foley[5]) is closely related to the pinhole-camera model and works as follows: given an eye point and an image plane, a 3D point p of the environment is projected onto the image plane by taking the intersection of the image plane with the straight line segment between the eye point and p (Figure 1). In the past, many projections differing from the perspective projection have been used that do not try to model human perception but are more appropriate for certain applications.
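As a small illustration, the following sketch computes this intersection for a single point. The concrete setup (eye at the origin, image plane z = d) is an assumption chosen for simplicity, not a convention taken from the paper:

```python
# Minimal sketch of the perspective projection described above: a 3D point p
# is mapped to the intersection of the line from the eye point through p with
# the image plane. The eye at the origin and the plane z = d are illustrative
# assumptions.

def project_perspective(p, eye=(0.0, 0.0, 0.0), d=1.0):
    """Project point p onto the plane z = d as seen from `eye`."""
    # Direction from the eye to p.
    dx, dy, dz = (p[i] - eye[i] for i in range(3))
    if dz == 0:
        raise ValueError("p lies in a plane parallel to the image plane")
    t = (d - eye[2]) / dz          # ray parameter where the line meets z = d
    return (eye[0] + t * dx, eye[1] + t * dy, d)

print(project_perspective((2.0, 1.0, 4.0)))   # -> (0.5, 0.25, 1.0)
```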

With parallel projection the 3D environment is mapped onto the image plane using parallel rays (Figure 2). This kind of projection preserves length, angle and area on planes parallel to the image plane, so certain properties of a 3D model can easily be deduced from its 2D image. Cartographic projections (Paeth[14]) are another important class of projections; they map the surface of the earth onto 2D maps (e.g., stereographic projection, cylinder projection, Mercator projection). Some of these projections are length, angle or area preserving, although due to the nature of the problem (mapping a sphere onto a plane) no projection can fulfill all three properties at the same time. The angle-preserving characteristic of the Mercator projection, for example, has been used extensively for navigational purposes.
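As an aside, the angle-preserving Mercator mapping mentioned above has a compact closed form; the sketch below is the standard textbook formulation, not something taken from this paper:

```python
import math

# Standard Mercator projection (textbook form, not from the paper): longitude
# maps linearly to x, latitude is stretched so that local angles are preserved.
def mercator(lon_deg, lat_deg):
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    x = lon
    y = math.log(math.tan(math.pi / 4 + lat / 2))  # diverges toward the poles
    return x, y

print(mercator(16.37, 48.21))  # approximate coordinates of Vienna
```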

Figure 2: parallel projection

Nonstandard projections have been used by artists as well. In some of the paintings of Pablo Picasso more than one point of view is integrated within a single image, producing strange effects; for example, one might see a person from two differing views simultaneously. M.C. Escher is another prominent example of an artist who worked, among other things, with unusual projections, distorted views and distorted images (Ernst[2], Escher[3]). Multiple views within a single image are also used for generating Omnimax images (Greene[9], Max[12]) and holographic images (Haines[11]). Glassner[8] briefly mentions possible modifications to the camera model used in ray tracing. In the following section an extension of the pinhole-camera model (perspective projection) is defined and integrated into the ray-tracing technique.

VIRTUAL CAMERA

Ray tracing using perspective projection to calculate a raster image proceeds as follows: the image plane is subdivided into an array of rectangular pixels. For each pixel a ray starting at the eye point and passing through the pixel center is cast into the 3D environment to determine the first intersection of the ray with an object, if any. If there is an intersection, the color of the pixel is calculated by evaluating an illumination model (point light sources, area light sources, ...) and the properties of the intersected surface. Global lighting effects like shadows, reflections and refractions are taken into account by tracing additional rays starting at the point of intersection. For details on ray tracing see Foley[5], Glassner[6], Watt[17]. For each pixel, then, a ray with its origin on the image plane and its direction given by the pixel position and the eye point is processed. Separating the concept of a raster image from the image plane and the type of projection allows an easy extension of the camera model (Figure 3), as the sketch below indicates.
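The following is a minimal sketch of the per-pixel loop just described, with the camera isolated behind a primary-ray interface. The names `primary_ray`, `intersect` and `shade` are placeholders of our own, not part of the paper; the point is that only the camera module needs replacing for the nonstandard projections discussed below:

```python
# Sketch of the per-pixel ray-tracing loop. `camera.primary_ray`,
# `scene.intersect` and `scene.shade` stand in for the camera module,
# intersection test and illumination model.

def render(camera, scene, width, height):
    image = [[(0, 0, 0)] * width for _ in range(height)]
    for v in range(height):
        for u in range(width):
            origin, direction = camera.primary_ray(u, v)
            hit = scene.intersect(origin, direction)   # first object hit, if any
            if hit is not None:
                image[v][u] = scene.shade(hit)         # evaluate illumination model
    return image
```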

Figure 3: raster image in parameter space (u,v), org(u,v), dir(u,v)

We now assume the raster image to be defined in a 2D parameter space, where each pixel is indexed by u and v. In addition two sets, a point set org and a vector set dir, are given which define a so-called virtual camera. org(u,v) is the origin and dir(u,v) the direction of the ray that is taken into account to calculate a color value for pixel (u,v). The sets org and dir can be specified as values of functions mapping 2D parameter space into 3D points or 3D vectors ($org: \mathbb{R}^2 \rightarrow \mathbb{R}^3$, $dir: \mathbb{R}^2 \rightarrow \mathbb{R}^3$). With an appropriate definition of org and dir (the points of org lie on a plane, and the lines defined by the sets org and dir intersect in one point, the eye point) the usual perspective projection results (Figure 4).
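The sketch below expresses a virtual camera as the pair of functions org and dir, with the usual perspective projection as the special case in which all ray origins lie on a plane and all rays pass through the eye point. The class shape and the concrete plane z = 1 are illustrative assumptions, not the authors' implementation:

```python
import math

# A virtual camera is just a pair of functions org(u,v) and dir(u,v).
class VirtualCamera:
    def __init__(self, org, dir_):
        self.org, self.dir = org, dir_

    def primary_ray(self, u, v):
        return self.org(u, v), self.dir(u, v)

def normalize(w):
    n = math.sqrt(sum(c * c for c in w))
    return tuple(c / n for c in w)

# Perspective projection as a special case: origins on the plane z = 1,
# all ray directions meeting in the eye point at the coordinate origin.
def make_perspective(width, height, fov_scale=1.0):
    def org(u, v):
        x = fov_scale * (2 * u / width - 1)
        y = fov_scale * (2 * v / height - 1)
        return (x, y, 1.0)
    def dir_(u, v):
        return normalize(org(u, v))   # ray through the eye at (0,0,0)
    return VirtualCamera(org, dir_)
```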

Figure 4: perspective projection defined through the sets org and dir

In the general case the above concept does not pose any restrictions on the sets org and dir. For example, the set org may be an arbitrary point set or may define a non-planar image surface (a distortion of the image plane). Arbitrary sets org and dir, however, will usually produce only an arbitrary set of color values; contrary to our intention, no comprehensible image of the 3D environment would be generated in such cases. To achieve interesting results the sets org and dir should at least vary smoothly with respect to u and v. Small local changes without discontinuities are preferable.

SOME SPECIFIC FUNCTIONS FOR ORG AND DIR

In the following, some specific types of sets org and dir are discussed in more detail. Furthermore, introducing dependencies between org and dir allows a simple specification of virtual cameras; e.g., the direction dir(u,v) could be taken as the normal vector at position org(u,v) of the smooth surface given by org. Some of the combinations of functions discussed below, e.g., org defined by a sphere function and dir defined by a constant vector, may produce interesting results, while other combinations, e.g., org given as a constant point and dir as a constant vector, are not particularly interesting. It has to be pointed out that the parametrizations used in the following functions are not unique and have a great impact on the produced image. This is one of the reasons why for certain combinations of the sets org and dir the outcome is rather difficult to predict. Code sketches for two of these functions follow at the end of this section.

plane function

With this approach the points of org are assumed to define a rectilinear grid on a plane. A simple parallel projection is defined if the directions of dir are taken to be parallel. Distorted images result in this case by choosing more elaborate direction sets dir. The direction set dir can be defined by a plane as well.

single point, constant vector

org is given as a single point, and analogously dir can be defined to be a constant vector. org defined as a single point and dir defined by a plane function results in the common implementation of the perspective projection. With dir given as a constant vector all directions are parallel, and depending on the specification of org an extension of the usual parallel projection is obtained. If, for example, org defines a curved surface and dir is a constant vector, then a parallel projection onto the curved image surface is given. Clearly, taking org to be a single point and dir to be a constant vector at the same time does not produce anything interesting (all pixels of the resulting raster image will have the same color).

dependent vectors

With this approach the directions of dir are defined with respect to org. If the point set org specifies a surface, then dir(u,v) could be taken as the normal vector (or tangent vector) of that surface at position org(u,v). dir(u,v) could also be chosen as a vector with constant deviation from the normal vector at org(u,v). With org defining a plane this again results in a parallel projection.

sphere function

The surface of a sphere is easily parameterized by u and v (longitude and latitude). Therefore org may be taken to be part of the surface of a sphere (projection onto a spherical image surface). Using a sphere for the set dir is done by taking the vector from the coordinate center to the point on the sphere with parameters (u,v) as dir(u,v). If the sphere does not enclose the coordinate center, only a restricted class of directions is possible regardless of the ranges of the u and v parameters. Two different (u,v) values may produce the same direction vector (if the corresponding points on the sphere are collinear with the coordinate center). This results in interesting mirroring effects along a circle (or part of it) consisting of those points whose connections with the coordinate center are tangent to the sphere. Moving the sphere farther away from the center of the coordinate system produces a zoom effect, as the class of possible directions is thereby reduced.

cylinder function

The points of org are chosen to lie on a cylinder surface (or part of it). If the cylinder encloses the objects of the 3D environment and the vectors of dir are taken to be normal to the cylinder surface, the usual cylinder projections well known in cartography result. As with the sphere function, the surface of a cylinder may also be used to specify the direction vectors of the set dir.

free-form surfaces

From the discussion above it follows that generally any parametric free-form surface (Watt[17]) might be used to specify the sets org and dir. There are different ways to define such a surface. The free-form surface may be given explicitly by an analytic formula. In another approach a set of control points for a regular array of parameter values (u,v) is given; interpolation or approximation schemes (e.g., Bézier, B-spline or NURBS surfaces) then define a smooth parametric surface that either interpolates or approximates the given set of control points. If the control points are not given for a regular array of parameter values (u,v), scattered-data interpolation techniques (Hagen[10], Nielson[13], Pottmann[15]) are applicable. Taking a differentiable free-form surface to define the set org, dependent vectors (e.g., normal vectors) are an interesting option to specify the set dir.

other functions

Only a small subset of possible functions for the definition of org and dir has been discussed above (namely those that have been implemented in a test system). One can easily think of a wide variety of other functions and surfaces that are suitable for virtual cameras, e.g., cones, tori, affine transformations of quadrics like ellipsoids, superquadrics and so on.
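As promised above, here are sketches of two of these functions in org/dir form. Each uses one plausible parameterization among many; as noted, the choice of parameterization strongly affects the resulting image, and the concrete parameters are assumptions:

```python
import math

# Sphere function for dir: dir(u,v) is the vector from the coordinate center
# to the point with longitude u and latitude v on a sphere with the given
# center and radius (both assumptions for illustration).
def make_sphere_dir(center, radius):
    cx, cy, cz = center
    def dir_(u, v):                        # u: longitude, v: latitude (radians)
        px = cx + radius * math.cos(v) * math.cos(u)
        py = cy + radius * math.cos(v) * math.sin(u)
        pz = cz + radius * math.sin(v)
        return (px, py, pz)                # direction from the coordinate center
    return dir_

# Cylinder function for org with dependent (normal) dir: points lie on a
# cylinder of the given radius around the z-axis, u is the angle and v the
# height; dir is the inward-pointing surface normal.
def make_cylinder_camera(radius):
    def org(u, v):
        return (radius * math.cos(u), radius * math.sin(u), v)
    def dir_(u, v):
        return (-math.cos(u), -math.sin(u), 0.0)   # inward normal
    return org, dir_
```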

CENTERS OF INTEREST (COINS)

In this section the concept of centers of interest (coins) is presented, which produces distorted projections of a 3D environment by allowing the user to concentrate on certain regions of object space. Some portions of the 3D environment might be more interesting to the user than others, and the user may want a more detailed view of such a portion with the remaining 3D environment still present in the final image (something like a local zoom with a continuously varying zoom factor). This can be accomplished through centers of interest (coins). A coin is a position in 3D object space that marks an interesting portion. It acts like a magnet and locally deforms (attracts) portions of the image surface org and/or locally biases the direction vectors of dir (Figure 5).

Figure 5: center of interest (coin) $p_c$

The deformation induced by a coin is determined as follows: the coin is projected onto the image surface org to calculate the parameter values $(u_c, v_c)$ where the influence of the coin is highest. org(u,v) and dir(u,v) for parameters (u,v) close to $(u_c, v_c)$ are modified more than org(u,v) and dir(u,v) for parameters (u,v) farther away from $(u_c, v_c)$. Generally the influence of a coin should decline smoothly with increasing distance in parameter space. Given a point $p = org(u,v)$, the deformation caused by a coin at position $p_c$ is calculated as follows ($p_d$ is the new position of point $p$):

$$p_d = p + w(m_c, l, d, s)\,(p_c - p)$$

The weight function $w(m_c, l, d, s)$ takes into account the strength $m_c$ of the coin, the distance $l$ between the coin $p_c$ and the point $p$, the parametric distance $d$ between $(u,v)$ and $(u_c, v_c)$, and the stiffness $s$ of the image surface org. A low stiffness factor, i.e., a rather flexible image surface, allows a significant deformation only very close to $(u_c, v_c)$. The deformation of a vector dir(u,v) is calculated analogously. The effect of more than one coin on the image surface is defined as the sum of the individual deformation effects:

$$p_d = p + \sum_i w(m_{c,i}, l_i, d_i, s_i)\,(p_{c,i} - p)$$

Selecting a weight function w that is nonzero only within a certain radius of influence around $(u_c, v_c)$ increases calculation efficiency. The concept of coins is entirely independent of the functional definition of a virtual camera through the sets org and dir, so coins can be combined with the previously discussed methods without problems. A sketch of the deformation computation follows below.
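The following is a minimal sketch of the multi-coin deformation formula above. The concrete weight function is an assumption chosen to satisfy the stated requirements (smooth decline with parametric distance, scaling with strength, attenuation with distance, locality controlled by stiffness, and a radius of influence); it is not the weight function used by the authors:

```python
import math

# Sketch of the coin deformation p_d = p + sum_i w_i * (p_ci - p).
# Each coin is (p_c, (u_c, v_c), m_c). The weight below is an assumed
# example: low stiffness s gives a narrow, strongly local dent around
# (u_c, v_c); m_c in [0,1] keeps the point between p and p_c.
def weight(m_c, l, d, s, radius=0.5):
    if d > radius:                          # radius of influence: cheap early out
        return 0.0
    falloff = math.exp(-(d / s) ** 2)       # smooth decline in parameter space
    return m_c * falloff / (1.0 + l)        # farther coins pull less

def deform(p, u, v, coins, s=0.3):
    px, py, pz = p
    dx = dy = dz = 0.0
    for p_c, (u_c, v_c), m_c in coins:      # sum the individual deformations
        l = math.dist(p, p_c)               # distance coin -> point
        d = math.hypot(u - u_c, v - v_c)    # parametric distance
        w = weight(m_c, l, d, s)
        dx += w * (p_c[0] - px)
        dy += w * (p_c[1] - py)
        dz += w * (p_c[2] - pz)
    return (px + dx, py + dy, pz + dz)
```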

IMPLEMENTATION AND RESULTS

The concepts discussed in the previous sections have been integrated into an existing ray-tracing system (Acquisto[1], Stürzlinger[16]). Virtual cameras can be incorporated into a ray-tracing system easily, without major modifications: only the camera module of the system, which generates the origins and directions of primary rays, has to be replaced to account for the nonstandard projections of virtual cameras. In general the evaluation of the sets org and dir accounts for only a small portion of the total image generation time. The sample images (Image 1 to Image 14) were calculated on a simple PC (386, 25 MHz); calculation time was in the range of 6 to 15 hours. Table 1 lists some statistics for those images.

image #  resolution  type of projection
1        640x480     perspective projection
2        640x480     org: plane, dir: constant vector (parallel projection)
3        1280x320    org: cylinder, dir: normal vector to org; cylinder axis coincides with the x-axis
4        1280x320    org: cylinder, dir: normal vector to org; cylinder axis coincides with the line through (-1,1,1) and (1,-1,-1)
5        640x480     org: point, dir: sphere not concentric with the coordinate center
6        640x480     org: B-spline surface enveloping the object halfway, dir: normal vector to org
7        640x480     org: plane, dir: normal vector to org; one coin modifying org and dir
8        640x480     org: plane, dir: normal vector to org; one coin modifying org and dir
9        640x480     org: plane, dir: normal vector to org; two coins modifying org and dir
10       640x480     org: plane, dir: normal vector to org; two coins modifying org and dir
11       640x480     org: B-spline surface, all control points on a plane elevated above the object, dir: normal vector to org
12       640x480     org: as 11, with one corner control point elevated and the diagonally opposite control point lowered, dir: normal vector to org
13       640x480     org: as 12, corner points offset by a larger value, dir: normal vector to org
14       640x480     org: as 13, corner points offset by a still larger value, dir: normal vector to org

Table 1: resolutions and projection types of the sample images

Aliasing due to the point-sampling nature of ray tracing is more severe with virtual cameras, as pixels close together may have primary rays with considerably differing points of origin and directions. The nonlinear projections of virtual cameras cause aliasing effects that vary in magnitude locally. Therefore antialiasing is done by adaptive sampling (critical regions are sampled more often), as sketched below. It turned out that images produced using virtual cameras are rather sensitive to parameter selection; choosing appropriate parameters (e.g., for the weight function of a coin) can be cumbersome in certain cases.
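One common form of adaptive sampling is sketched here; the recursive corner-contrast criterion is an assumption about how such a scheme can work, since the paper does not detail its criterion:

```python
# Sketch of adaptive supersampling (an assumed, classic variant): sample the
# four corners of a pixel region and subdivide recursively wherever the
# corner colors differ by more than a threshold, averaging the results.
def adaptive_sample(trace, x0, y0, x1, y1, threshold=0.05, depth=3):
    corners = [trace(x0, y0), trace(x1, y0), trace(x0, y1), trace(x1, y1)]
    spread = max(max(c) - min(c) for c in zip(*corners))   # per-channel contrast
    if spread <= threshold or depth == 0:
        return tuple(sum(ch) / 4 for ch in zip(*corners))  # average the corners
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2                  # split into quadrants
    quads = [adaptive_sample(trace, x0, y0, xm, ym, threshold, depth - 1),
             adaptive_sample(trace, xm, y0, x1, ym, threshold, depth - 1),
             adaptive_sample(trace, x0, ym, xm, y1, threshold, depth - 1),
             adaptive_sample(trace, xm, ym, x1, y1, threshold, depth - 1)]
    return tuple(sum(ch) / 4 for ch in zip(*quads))
```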

CONCLUSION

An extension of the usual camera model of ray tracing has been presented. For each pixel (u,v) two functions org(u,v) and dir(u,v) are evaluated: org(u,v) is the origin and dir(u,v) the direction vector of the primary ray at pixel (u,v). Several functions for org and dir have been discussed. The concept of centers of interest (coins) was introduced. Coins allow the user to focus on certain portions of the 3D object space (a local nonlinear zoom). Coins are point locations in the 3D environment that distort (attract) the image surface and bias the directions of primary rays.

Some ideas for future work: virtual cameras might be considered for the form-factor calculations of non-polygonal objects in the radiosity method; we have not investigated this application in depth so far. One can also think of an interactive system for the definition and modification of virtual cameras. Given fast hardware, specific parameters of virtual cameras could be animated; how about an animation where the strength $m_c$ of a coin is increased from zero to some finite value? Examples where virtual cameras might be used are the arts and virtual reality. If you have the freedom to define your own virtual reality, why not take a look at it with your own virtual camera?

REFERENCES

1. Acquisto, P., "Virtual Irreality: Eine Verzerrungskamera", diploma thesis, Technical University Vienna, 1992.
2. Ernst, B., "Der Zauberspiegel des M.C. Escher", Heinz Moser publisher, Munich, 1978.
3. Escher, M.C., "Grafik und Zeichnungen", Heinz Moser publisher, Munich, 1979.
4. Foley, Th.A., Lane, D.A., Nielson, G.M., Franke, R., Hagen, H., "Interpolation of Scattered Data on Closed Surfaces", Computer Aided Geometric Design, Vol. 7, 1990, pp. 303-312.
5. Foley, J.D., van Dam, A., Feiner, St.K., Hughes, J.F., "Computer Graphics: Principles and Practice", Addison-Wesley, 1990.
6. Glassner, A., "An Introduction to Ray Tracing", Academic Press, 1989.
7. Glassner, A., "Graphics Gems", Academic Press, 1990.
8. Glassner, A., "The Theory and Practice of Ray Tracing", Tutorial Note 1, Eurographics '91, Vienna, 1991.
9. Greene, N., Heckbert, P.S., "Creating Raster Omnimax Images from Multiple Perspective Views Using the Elliptical Weighted Average Filter", IEEE Computer Graphics & Applications, June 1986, pp. 21-26.
10. Hagen, H., "Scattered Data Methoden", Proceedings of Visualisierungstechniken und Algorithmen, Vienna, September 1988.
11. Haines, K., Haines, D., "Computer Graphics for Holography", IEEE Computer Graphics & Applications, January 1992, pp. 37-46.
12. Max, N.L., "Computer Graphics Distortion for IMAX and OMNIMAX Projection", Proceedings of Nicograph 83, December 1983, pp. 137-159.

13. Nielson, G.M., Foley, Th.A., Hamann, B., Lane, D., "Visualizing and Modeling Scattered Multivariate Data", IEEE Computer Graphics & Applications, May 1991, pp. 47-55.
14. Paeth, A.W., "Digital Cartography for Computer Graphics", in Glassner[7], pp. 307-320.
15. Pottmann, H., Eck, M., "Modified Multiquadric Methods for Scattered Data Interpolation over a Sphere", Computer Aided Geometric Design, Vol. 7, 1990, pp. 313-321.
16. Stürzlinger, W., Tobler, R.F., "FLiRT-Program Source und Dokumentation", Institute for Computer Graphics, Technical University Vienna, 1991.
17. Watt, A., "Fundamentals of Three-Dimensional Computer Graphics", Addison-Wesley, 1990.
