Back-Face Detection
• The simplest thing we can do is find the faces on the backs of polyhedra and discard them
• We know from before that a point (x, y, z) is behind a polygon surface if: Ax + By + Cz + D < 0
• This can actually be made even easier if we organise things to suit ourselves. Ensure we have a right-handed system with the viewing direction along the negative z-axis. Now we can simply say that if the z component of the polygon's normal is less than zero the surface cannot be seen
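The sign test above can be sketched as follows, assuming polygons are stored with precomputed unit normals (the polygon representation here is an illustrative assumption):

```python
# Back-face culling sketch: with a right-handed system and the viewing
# direction along the negative z-axis, a polygon whose normal has a
# negative z component faces away from the viewer and can be discarded.

def visible_faces(polygons):
    """Keep only polygons whose normal z component is >= 0."""
    return [poly for poly in polygons if poly["normal"][2] >= 0]

polys = [
    {"name": "front", "normal": (0.0, 0.0, 1.0)},
    {"name": "back",  "normal": (0.0, 0.0, -1.0)},
    {"name": "side",  "normal": (1.0, 0.0, 0.0)},
]
kept = visible_faces(polys)   # the "back" face is discarded
```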
Depth-Buffer Method
• Compares surface depth values throughout a scene for each pixel position on the projection plane. Usually applied to scenes containing only polygons. As depth values can be computed easily, this tends to be very fast. Also often called the z-buffer method
Depth-Buffer Algorithm
1. Initialise the depth buffer and frame buffer so that for all buffer positions (x, y): depthBuff(x, y) = 1.0 and frameBuff(x, y) = bgColour
2. Process each polygon in the scene, one at a time:
   - For each projected (x, y) pixel position of a polygon, calculate the depth z (if not already known)
   - If z < depthBuff(x, y), compute the surface colour at that position and set depthBuff(x, y) = z and frameBuff(x, y) = surfColour(x, y)
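The algorithm above can be sketched over a tiny screen; the fragment representation and colour values are illustrative assumptions:

```python
# Minimal depth-buffer (z-buffer) sketch over a 2x2 "screen".
# Each fragment is (x, y, depth, colour); smaller depth = closer.

WIDTH, HEIGHT = 2, 2
depth_buff = [[1.0] * WIDTH for _ in range(HEIGHT)]    # 1.0 = far plane
frame_buff = [["bg"] * WIDTH for _ in range(HEIGHT)]   # background colour

def process_fragment(x, y, z, colour):
    # Keep the fragment only if it is nearer than what is already stored.
    if z < depth_buff[y][x]:
        depth_buff[y][x] = z
        frame_buff[y][x] = colour

# Two polygons covering pixel (0, 0); the red one is nearer and wins.
process_fragment(0, 0, 0.8, "blue")
process_fragment(0, 0, 0.3, "red")
process_fragment(1, 1, 0.5, "green")
```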
Calculating Depth
• At any surface position the depth is calculated from the plane equation as:

z = (-Ax - By - D) / C
• For any scan line, adjacent x positions differ by ±1, as do adjacent y positions
z' = (-A(x + 1) - By - D) / C = z - A/C
Iterative Calculations
• The depth-buffer algorithm proceeds by starting at the top vertex of the polygon. Then we recursively calculate the x-coordinate values down a left edge of the polygon. The x value for the beginning position on each scan line can be calculated from the previous one as x' = x - 1/m, where m is the slope of the edge
The depth at the start of each new scan line is then:

z' = z + (A/m + B) / C
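The incremental step across a scan line can be checked against the full plane equation; the plane coefficients below are illustrative assumptions:

```python
# Incremental depth calculation sketch for a plane Ax + By + Cz + D = 0.
# Moving one pixel right on a scan line changes z by -A/C, avoiding a
# full plane-equation evaluation per pixel.

A, B, C, D = 2.0, 3.0, 4.0, -20.0

def depth(x, y):
    """Depth from the full plane equation."""
    return (-A * x - B * y - D) / C

# Start of a scan line, then step across it incrementally.
z = depth(0, 1)
incremental = [z]
for x in range(1, 4):
    z = z - A / C          # z' = z - A/C for a unit step in x
    incremental.append(z)

direct = [depth(x, 1) for x in range(4)]   # matches term by term
```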
A-Buffer Method
• The A-buffer method is an extension of the depth-buffer method. It is a visibility-detection method developed at Lucasfilm Studios for the rendering system REYES (Renders Everything You Ever Saw). The A-buffer expands on the depth-buffer method to allow transparency. The key data structure in the A-buffer is the accumulation buffer
• If depth >= 0, then the surface data field stores the depth of that pixel position as before
• If depth < 0, then the data field stores a pointer to a linked list of surface data
• Surface information in the A-buffer includes: RGB intensity components, Opacity parameter, Depth, Percent of area coverage,
Surface identifier, Other surface rendering parameters
• The algorithm proceeds just like the depth buffer algorithm. The depth and opacity values are used to determine the final colour of a
pixel
Scan-Line Method
• An image-space method for identifying visible surfaces. Computes and compares depth values along the various scan lines for a scene
• Two important tables are maintained: the edge table and the surface facet table
• The edge table contains: coordinate end points of each line in the scene, the inverse slope of each line, and pointers into the surface facet table to connect edges to surfaces
• The surface facet table contains: the plane coefficients, surface material properties, other surface data, and possibly pointers into the edge table
• To facilitate the search for surfaces crossing a given scan line, an active list of edges is formed for each scan line as it is processed. The active list stores only those edges that cross the scan line, in order of increasing x
• Also, a flag is set for each surface to indicate whether a position along a scan line is inside or outside the surface. Pixel positions across each scan line are processed from left to right. At the left intersection with a surface the surface flag is turned on; at the right intersection point the flag is turned off
• We only need to perform depth calculations when more than one surface has its flag turned on at a certain scan-line position
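The flag-toggling logic for one scan line can be sketched as follows; the edge records and surface names are illustrative assumptions:

```python
# Scan-line method sketch for one scan line: edges are stored as
# (x_intersection, surface_id) pairs, processed in increasing x, with a
# flag set tracking which surfaces are currently "on". Depth comparisons
# are only needed on spans where more than one flag is on.

def spans_needing_depth_test(active_edges):
    """Return (x_start, x_end) spans where more than one surface flag is on."""
    active_edges = sorted(active_edges)      # increasing x
    flags = set()
    spans = []
    for i, (x, surface) in enumerate(active_edges):
        # Toggle the flag: a left intersection turns it on, a right turns it off.
        if surface in flags:
            flags.discard(surface)
        else:
            flags.add(surface)
        if len(flags) > 1 and i + 1 < len(active_edges):
            spans.append((x, active_edges[i + 1][0]))
    return spans

# Surface S1 covers x in [2, 10], S2 covers x in [6, 14]: overlap on [6, 10].
edges = [(2, "S1"), (6, "S2"), (10, "S1"), (14, "S2")]
```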
Depth-Sorting Method
• A visible surface detection method that uses both image-space and object-space operations
• Basically, the following two operations are performed: surfaces are sorted in order of decreasing depth, and surfaces are scan-converted in order, starting with the surface of greatest depth
• First, assume that we are viewing along the z direction. All surfaces in the scene are ordered according to the smallest z value on each surface. The surface S at the end of the list is then compared against all other surfaces to see if there are any depth overlaps. If no overlaps occur then the surface is scan converted as before and the process repeats with the next surface
• When there is depth overlap, we make the following tests; if any one succeeds, no reordering is necessary: the bounding rectangles for the two surfaces do not overlap, surface S is completely behind the overlapping surface relative to the viewing position, the overlapping surface is completely in front of S relative to the viewing position, or the boundary edge projections of the two surfaces onto the view plane do not overlap
Other Techniques
• There are a number of other techniques, all based around subdividing the scene: the BSP-tree method, the area-subdivision method, and octree methods. Ray casting can also be used
• When few surfaces are present, either the depth-sorting algorithm or the BSP-tree method tends to perform best. The scan-line method also performs well in these situations – up to several thousand polygon surfaces. The depth-buffer method tends to scale linearly, so for low numbers of polygons its relative performance is poor, but it wins out for higher numbers of polygons.
Point Light Sources
• A point source is the simplest model we can use for a light source. We simply define: the position of the light, and the RGB values for the colour of the light. Light is emitted in all directions. Useful for small light sources
• As light moves away from a light source its intensity diminishes. At any distance dl from the light source the intensity diminishes by a factor of 1/dl². However, using this factor directly does not produce very good results, so we use instead an inverse quadratic function of the form:

f_radatten(dl) = 1 / (a0 + a1·dl + a2·dl²)
where the coefficients a0, a1, and a2 can be varied to produce optimal results
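The attenuation function is direct to implement; the coefficient values below are illustrative assumptions chosen so that the factor is 1 at the source:

```python
# Radial attenuation sketch: inverse quadratic falloff
# f(d) = 1 / (a0 + a1*d + a2*d^2).

def radial_attenuation(d, a0=1.0, a1=0.1, a2=0.01):
    return 1.0 / (a0 + a1 * d + a2 * d * d)

near = radial_attenuation(0.0)    # full intensity at the source
far = radial_attenuation(10.0)    # 1 / (1 + 1 + 1) = 1/3
```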
• To turn a point light source into a spotlight we simply add a vector direction and an angular limit θl. We can denote Vlight as the unit vector in the direction of the light and Vobj as the unit vector from the light source to an object. The dot product of these two vectors gives us the cosine of the angle between them. If this angle is inside the light's angular limit then the object is within the spotlight
• As well as light intensity decreasing as we move away from a light source, it also decreases angularly. A commonly used function for calculating angular attenuation is:

f_angatten(φ) = cos^al φ,   0° ≤ φ ≤ θl

where the attenuation exponent al is assigned some positive value and the angle φ is measured from the cone axis
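The spotlight cone test and angular attenuation can be sketched together; the axis, angular limit, and exponent below are illustrative assumptions:

```python
# Spotlight sketch: an object is inside the cone when the angle between
# the spotlight axis and the light-to-object direction is within the
# angular limit, i.e. cos(angle) >= cos(theta_l). Inside the cone the
# angular attenuation factor is cos(angle) ** a_l.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def spot_factor(v_light, v_obj, theta_l_deg, a_l):
    """Return the angular attenuation factor, or 0.0 outside the cone."""
    cos_angle = dot(v_light, v_obj)              # both are unit vectors
    if cos_angle < math.cos(math.radians(theta_l_deg)):
        return 0.0                               # outside the angular limit
    return cos_angle ** a_l

axis = (0.0, 0.0, -1.0)                          # spotlight pointing down -z
on_axis = spot_factor(axis, (0.0, 0.0, -1.0), 30.0, 2.0)   # object on axis
outside = spot_factor(axis, (1.0, 0.0, 0.0), 30.0, 2.0)    # object at 90 degrees
```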
Reflected Light
• The colours that we perceive are determined by the nature of the light reflected from an object. For example, if white light is shone
onto a green object most wavelengths are absorbed, while green light is reflected from the object
• The amount of incident light reflected by a surface depends on the type of material. Shiny materials reflect more of the incident light
and dull surfaces absorb more of the incident light. For transparent surfaces some of the light is also transmitted through the
material
Diffuse Reflection
• Surfaces that are rough or grainy tend to reflect light in all directions. This scattered light is called diffuse reflection
Specular Reflection
• In addition to diffuse reflection, some of the reflected light is concentrated into a highlight or bright spot. This is called specular reflection
Ambient Light
• A surface that is not exposed to direct light may still be lit up by reflections from other nearby objects – ambient light. The total
reflected light from a surface is the sum of the contributions from light sources and reflected light
• We will consider a basic illumination model which gives reasonably good results and is used in most graphics systems. The important
components are: Ambient light, Diffuse reflection, Specular reflection. For the most part we will consider only monochromatic light
Ambient Light
• To incorporate background light we simply set a general brightness level for a scene. This approximates the global diffuse reflections
from various surfaces within the scene. We will denote this value as Ia
Diffuse Reflection
• First we assume that surfaces reflect incident light with equal intensity in all directions. Such surfaces are referred to as ideal diffuse
reflectors or Lambertian reflectors. A parameter kd is set for each surface that determines the fraction of incident light that is to be
scattered as diffuse reflections from that surface. This parameter is known as the diffuse-reflection coefficient or the diffuse
reflectivity
• For background lighting effects we can assume that every surface is fully illuminated by the scene's ambient light Ia. Therefore the ambient contribution to the diffuse reflection is given as: I_ambdiff = kd·Ia. Ambient light alone is very uninteresting, so we need some other lights in a scene as well
Diffuse Reflection
• When a surface is illuminated by a light source, the amount of incident light depends on the orientation of the surface relative to the light source direction. The angle between the incoming light direction and the surface normal is referred to as the angle of incidence, θ, so the diffuse reflection is proportional to cos θ = N·L
• To combine the diffuse reflections arising from ambient and incident light, most graphics packages use two separate diffuse-reflection coefficients: ka for ambient light, kd for incident light. The total diffuse reflection equation for a single point source can then be given as:
I_diff = ka·Ia + kd·Il·(N·L)   if N·L > 0
I_diff = ka·Ia                 if N·L ≤ 0
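The two-case equation above can be sketched directly; the coefficient and vector values are illustrative assumptions:

```python
# Diffuse reflection sketch: I_diff = ka*Ia + kd*Il*(N.L) when N.L > 0,
# otherwise just the ambient term ka*Ia. N and L must be unit vectors.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def diffuse(ka, Ia, kd, Il, N, L):
    n_dot_l = dot(N, L)
    if n_dot_l > 0:
        return ka * Ia + kd * Il * n_dot_l
    return ka * Ia                     # light is behind the surface

N = (0.0, 0.0, 1.0)
facing = diffuse(0.1, 1.0, 0.7, 1.0, N, (0.0, 0.0, 1.0))    # 0.1 + 0.7
behind = diffuse(0.1, 1.0, 0.7, 1.0, N, (0.0, 0.0, -1.0))   # ambient only
```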
Specular Reflection
• The bright spot that we see on a shiny surface is the result of near-total reflection of the incident light in a concentrated region around the specular-reflection angle. The specular-reflection angle equals the angle of the incident light. A perfect mirror reflects light only in the specular-reflection direction. Other objects exhibit specular reflections over a finite range of viewing positions around vector R
• The Phong specular-reflection model (or simply Phong model) is an empirical model for calculating the specular-reflection range developed in 1973 by Phong Bui Tuong. The Phong model sets the intensity of specular reflection as proportional to the cosine of the angle between the viewing vector and the specular-reflection vector, raised to a power: the specular-reflection intensity is proportional to cos^ns Φ. The angle Φ can be varied between 0° and 90° so that cos Φ varies from 1.0 to 0.0. The specular-reflection exponent ns is determined by the type of surface we want to display: shiny surfaces have a very large value (>100) and rough surfaces have a value near 1
• For some materials the amount of specular reflection depends heavily on the angle of the incident light. Fresnel’s Laws of Reflection
describe in great detail how specular reflections behave.
• For a single light source we can combine the effects of diffuse and specular reflections simply as follows:
I = I_diff + I_spec = ka·Ia + kd·Il·(N·L) + ks·Il·(V·R)^ns
• We can place any number of light sources in a scene. We compute the diffuse and specular reflections as sums of the contributions
from the various sources
I = I_ambdiff + Σ_{l=1..n} (I_l,diff + I_l,spec)
  = ka·Ia + Σ_{l=1..n} Il·[kd·(N·L) + ks·(V·R)^ns]
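The summation over light sources can be sketched as follows; the scene values (coefficients, vectors, light list) are illustrative assumptions, and each dot product is clamped at zero as in the two-case diffuse equation:

```python
# Combined illumination sketch for n point sources:
# I = ka*Ia + sum_l Il * (kd*(N.L) + ks*(V.R)**ns)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def illuminate(ka, Ia, kd, ks, ns, N, V, lights):
    """lights: list of (Il, L, R) with unit vectors L (to light), R (reflection)."""
    total = ka * Ia
    for Il, L, R in lights:
        n_dot_l = max(dot(N, L), 0.0)      # clamp: no light from behind
        v_dot_r = max(dot(V, R), 0.0)
        total += Il * (kd * n_dot_l + ks * v_dot_r ** ns)
    return total

N = V = (0.0, 0.0, 1.0)
lights = [
    (1.0, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)),   # head-on light: full contribution
    (0.5, (1.0, 0.0, 0.0), (1.0, 0.0, 0.0)),   # grazing light: contributes nothing
]
I = illuminate(ka=0.1, Ia=1.0, kd=0.6, ks=0.3, ns=50, N=N, V=V, lights=lights)
```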
• To incorporate radial and angular intensity attenuation into our model we simply adjust our equation to take these into account.
• For an RGB colour description each intensity specification is a three element vector
• So, for each light source: Il = (IlR, IlG, IlB). Similarly, all other parameters are given as vectors
Flat Surface Rendering
• The simplest method for rendering a polygon surface. The same colour is assigned to all surface positions. The illumination at a single point on the surface is calculated and used for the entire surface. Flat surface rendering is extremely fast, but can be unrealistic
Gouraud Surface Rendering
• Often also called intensity-interpolation surface rendering. Intensity levels are calculated at each vertex and interpolated across the surface. To render a polygon, Gouraud surface rendering proceeds as follows: determine the average unit normal vector at each vertex of the polygon, apply an illumination model at each polygon vertex to obtain the light intensity at that position, then linearly interpolate the vertex intensities over the projected area of the polygon
Nv = (N1 + N2 + N3 + N4) / |N1 + N2 + N3 + N4|

or, in general, for a vertex shared by n faces:

Nv = (Σ_{i=1..n} Ni) / |Σ_{i=1..n} Ni|
• Gouraud surface rendering can be implemented relatively efficiently using an iterative approach. Typically Gouraud shading is implemented as part of a visible-surface detection technique. The major problem with Gouraud shading is in handling specular reflections
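The two Gouraud building blocks, averaging face normals into a vertex normal and interpolating vertex intensities along an edge, can be sketched as follows; the face normals, intensities, and y values are illustrative assumptions:

```python
# Gouraud sketch: average the face normals meeting at a vertex into a
# unit vertex normal, then linearly interpolate vertex intensities down
# an edge at a given scan line y.
import math

def vertex_normal(face_normals):
    """Unit vector in the direction of the summed face normals."""
    s = [sum(c) for c in zip(*face_normals)]
    length = math.sqrt(sum(c * c for c in s))
    return tuple(c / length for c in s)

def lerp_intensity(I1, y1, I2, y2, y):
    """Intensity on the edge between vertices 1 and 2 at scan line y."""
    t = (y - y2) / (y1 - y2)
    return t * I1 + (1 - t) * I2

Nv = vertex_normal([(0, 0, 1), (0, 1, 0)])        # two faces share the vertex
Imid = lerp_intensity(1.0, 10.0, 0.0, 0.0, 5.0)   # halfway down the edge
```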
Phong Surface Rendering
• A more accurate interpolation-based approach for rendering a polygon was developed by Phong Bui Tuong. Basically, the Phong surface rendering model (or normal-vector interpolation rendering) interpolates normal vectors instead of intensity values. To render a polygon, Phong surface rendering proceeds as follows: determine the average unit normal vector at each vertex of the polygon, linearly interpolate the vertex normals over the projected area of the polygon, then apply an illumination model at positions along scan lines to calculate pixel intensities using the interpolated normal vectors
Interpolating down the two edges (points 4 and 5) and then across the scan line to point p:

N4 = ((y4 - y2)/(y1 - y2))·N1 + ((y1 - y4)/(y1 - y2))·N2
N5 = ((y5 - y2)/(y3 - y2))·N3 + ((y3 - y5)/(y3 - y2))·N2
Np = ((x5 - xp)/(x5 - x4))·N4 + ((xp - x4)/(x5 - x4))·N5
• Phong shading is much slower than Gouraud shading, as the lighting model is re-evaluated many times. However, there are fast Phong surface rendering approaches that can be implemented iteratively. Typically Phong shading is implemented as part of a visible-surface detection technique
Basic Ray-Tracing
• Ray tracing proceeds as follows: Fire a single ray from each pixel position into the scene along the projection path. Determine which
surfaces the ray intersects and order these by distance from the pixel. The nearest surface to the pixel is the visible surface for that
pixel. Reflect a ray off the visible surface along the specular reflection angle. For transparent surfaces also send a ray through the
surface in the refraction direction. Repeat the process for these secondary rays
Terminating Ray-Tracing
• We terminate a ray-tracing path when any one of the following conditions is satisfied: the ray intersects no surfaces, the ray intersects a light source that is not a reflecting surface, or a maximum allowable number of reflections has taken place
• The path from the intersection to the light source is known as the shadow ray. If any object intersects the shadow ray between the
surface and the light source then the surface is in shadow with respect to that source
Ray-Tracing Tree
• As the rays ricochet around the scene each intersected surface is added to a binary ray-tracing tree
• The left branches in the tree are used to represent reflection paths
• The right branches in the tree are used to represent transmission paths
• The tree’s nodes store the intensity at that surface. The tree is used to keep track of all contributions to a given pixel. After the ray-
tracing tree has been completed for a pixel the intensity contributions are accumulated. We start at the terminal nodes (bottom) of
the tree. The surface intensity at each node is attenuated by the distance from the parent surface and added to the intensity of the
parent surface. The sum of the attenuated intensities at the root node is assigned to the pixel
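The bottom-up accumulation over the ray-tracing tree can be sketched as follows; the node layout, intensity values, and the single per-node attenuation factor are illustrative assumptions:

```python
# Ray-tracing tree sketch: each node holds a local surface intensity plus
# optional reflection (left) and transmission (right) children. Starting
# from the terminal nodes, child intensities are attenuated and added to
# the parent; the sum at the root is the pixel intensity.

class RayNode:
    def __init__(self, intensity, reflected=None, transmitted=None,
                 attenuation=0.5):
        self.intensity = intensity      # local surface intensity
        self.reflected = reflected      # left branch: reflection path
        self.transmitted = transmitted  # right branch: transmission path
        self.attenuation = attenuation  # factor applied to child contributions

def accumulate(node):
    """Sum attenuated child intensities into this node, bottom-up."""
    if node is None:
        return 0.0
    child_sum = accumulate(node.reflected) + accumulate(node.transmitted)
    return node.intensity + node.attenuation * child_sum

# A pixel whose primary ray hits a surface that both reflects and transmits.
tree = RayNode(0.4,
               reflected=RayNode(0.2),
               transmitted=RayNode(0.1))
pixel_intensity = accumulate(tree)   # 0.4 + 0.5 * (0.2 + 0.1)
```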
• The unit vector u along the initial ray from the projection reference point Pprp through a pixel position Ppix, and a point along the ray, are given by:

u = (Ppix - Pprp) / |Ppix - Pprp|
P = P0 + s·u

• The specular-reflection direction is:

R = u - 2(u·N)·N

• The transmission (refraction) direction is:

T = (ηi/ηr)·u - (cos θr - (ηi/ηr)·cos θi)·N

where cos θr = sqrt(1 - (ηi/ηr)²·(1 - cos² θi))
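The reflection and refraction directions can be sketched directly from these equations; the incident direction, normal, and indices of refraction below are illustrative assumptions (all direction vectors unit length):

```python
# Secondary-ray direction sketch: R = u - 2(u.N)N for reflection, and
# T = (ri/rr)*u - (cos_r - (ri/rr)*cos_i)*N for refraction, with
# cos_r = sqrt(1 - (ri/rr)^2 * (1 - cos_i^2)).
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(u, N):
    d = dot(u, N)
    return tuple(ui - 2.0 * d * ni for ui, ni in zip(u, N))

def refract(u, N, eta_i, eta_r):
    ratio = eta_i / eta_r
    cos_i = -dot(u, N)                  # u points towards the surface
    cos_r = math.sqrt(1.0 - ratio * ratio * (1.0 - cos_i * cos_i))
    return tuple(ratio * ui - (cos_r - ratio * cos_i) * ni
                 for ui, ni in zip(u, N))

N = (0.0, 0.0, 1.0)
u = (0.0, 0.0, -1.0)                    # ray straight into the surface
R = reflect(u, N)                       # bounces straight back: (0, 0, 1)
T = refract(u, N, 1.0, 1.5)             # normal incidence: passes straight through
```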