
CAP405: COMPUTER GRAPHICS

Homework Title / No.: 4    Course Code: CAP405

Course Instructor: MR. SANJAY SOOD

DOA: 24/4/2010    DOS: 8/5/2010

Student's Roll No.: RD3803B50    Section No.: D3803

Declaration:
I declare that this assignment is my individual work. I have not
copied from any other student’s work or from any other source
except where due acknowledgment is made explicitly in the
text, nor has any part been written for me by another person.

Student’s Signature:

SANDEEP SAINI

Evaluator’s comments:
________________________________________________________________

Marks obtained: ___________ out of ______________________

Part A

1) While clipping a polygon, it is said that Sutherland–Hodgman
is a better method than the Weiler–Atherton polygon clipping
algorithm. Perform clipping on a polygon and justify the above
statement.
Clipping refers to an optimization where the computer only
draws things that might be visible to the viewer.

Clipping a polygon uses two methods:

• Sutherland–Hodgman

• Weiler–Atherton polygon clipping

Sutherland–Hodgman algorithm

The Sutherland–Hodgman algorithm is used for clipping
polygons. It works by extending each edge of the convex
clip polygon in turn and selecting only those vertices of the
subject polygon that lie on the visible side.

The algorithm begins with an input list of all vertices in the
subject polygon. Next, one side of the clip polygon is
extended infinitely in both directions, and the path of the
subject polygon is traversed. Vertices from the input list are
inserted into an output list if they lie on the visible side of
the extended clip polygon line, and new vertices are added
to the output list where the subject polygon path crosses
the extended clip polygon line.
This process is repeated iteratively for each clip polygon
side, using the output list from one stage as the input list
for the next. Once all sides of the clip polygon have been
processed, the final generated list of vertices defines a new
single polygon that is entirely visible. Note that if the
subject polygon was concave at vertices outside the
clipping polygon, the new polygon may have coincident
(i.e. overlapping) edges – this is acceptable for rendering,
but not for other applications such as computing shadows.
[Figure: all steps of clipping the concave polygon 'W' by a
five-sided convex clip polygon]
The Weiler–Atherton algorithm overcomes this by
returning a set of divided polygons, but is more complex
and computationally more expensive, so Sutherland–
Hodgman is used for many rendering applications.
Sutherland–Hodgman can also be extended into 3D space
by clipping the polygon paths based on the boundaries of
planes defined by the viewing space.
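The per-edge loop described above can be sketched in a few lines. This is a minimal illustration, not production clipping code; the function and variable names are my own, and a counter-clockwise convex clip polygon is assumed:

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip `subject` against convex `clip` (both lists
    of (x, y) tuples, clip given counter-clockwise)."""
    def inside(p, a, b):
        # Point lies on the left of directed edge a->b (inside for CCW clip).
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0

    def intersect(p, q, a, b):
        # Intersection of segment pq with the infinitely extended edge ab.
        x1, y1, x2, y2 = p[0], p[1], q[0], q[1]
        x3, y3, x4, y4 = a[0], a[1], b[0], b[1]
        denom = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / denom
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))

    output = list(subject)
    for i in range(len(clip)):                 # one pass per clip edge
        a, b = clip[i], clip[(i+1) % len(clip)]
        input_list, output = output, []        # last stage's output is input
        for j in range(len(input_list)):
            p, q = input_list[j], input_list[(j+1) % len(input_list)]
            if inside(q, a, b):
                if not inside(p, a, b):        # entering: add crossing point
                    output.append(intersect(p, q, a, b))
                output.append(q)               # keep visible-side vertex
            elif inside(p, a, b):              # leaving: add crossing point
                output.append(intersect(p, q, a, b))
    return output
```

For example, clipping the unit squares [0,2]×[0,2] against [1,3]×[1,3] yields the overlap square with corners (1,1), (2,1), (2,2), (1,2).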

Weiler–Atherton clipping algorithm

The Weiler–Atherton algorithm is capable of clipping a
concave polygon with interior holes to the boundaries of
another concave polygon, also with interior holes. The
polygon to be clipped is called the subject polygon (SP) and
the clipping region is called the clip polygon (CP). The new
boundaries created by clipping the SP against the CP are
identical to portions of the CP. No new edges are created.
Hence, the number of resulting polygons is minimized.
The algorithm describes both the SP and the CP by a
circular list of vertices. The exterior boundaries of the
polygons are described clockwise, and the interior
boundaries or holes are described counter-clockwise. When
traversing the vertex list, this convention ensures that the
inside of the polygon is always to the right. The boundaries
of the SP and the CP may or may not intersect. If they
intersect, the intersections occur in pairs. One of the
intersections occurs when the SP edge enters the inside of
the CP and one when it leaves. Fundamentally, the
algorithm starts at an entering intersection and follows the
exterior boundary of the SP clockwise until an intersection
with a CP is found. At the intersection a right turn is made,
and the exterior of the CP is followed clockwise until an
intersection with the SP is found. Again, at the intersection,
a right turn is made, with the SP now being followed. The
process is continued until the starting point is reached.
Interior boundaries of the SP are followed counter-
clockwise.

A more formal statement of the algorithm is

• Determine the intersections of the subject and clip
polygons - Add each intersection to the SP and CP vertex
lists. Tag each intersection vertex and establish a
bidirectional link between the SP and CP lists for each
intersection vertex.
• Process nonintersecting polygon borders - Establish two
holding lists: one for boundaries which lie inside the CP
and one for boundaries which lie outside. Ignore CP
boundaries which are outside the SP. CP boundaries inside
the SP form holes in the SP. Consequently, a copy of the
CP boundary goes on both the inside and the outside
holding list. Place the boundaries on the appropriate
holding list.
• Create two intersection vertex lists - One, the entering list,
contains only the intersections for the SP edge entering
the inside of the CP. The other, the leaving list, contains
only the intersections for the SP edge leaving the inside of
the CP. The intersection type will alternate along the
boundary. Thus, only one determination is required for
each pair of intersections.
• Perform the actual clipping -
Polygons inside the CP are found using the following
procedure.
o Remove an intersection vertex from the entering list.
If the list is empty, the process is complete.
o Follow the SP vertex list until an intersection is found.
Copy the SP list up to this point to the inside holding
list.
o Using the link, jump to the CP vertex list.
o Follow the CP vertex list until an intersection is found.
Copy the CP vertex list up to this point to the inside
holding list.
o Jump back to the SP vertex list.
o Repeat until the starting point is again reached. At
this point, the new inside polygon has been closed.

Polygons outside the CP are found using the same
procedure, except that the initial intersection vertex is
obtained from the leaving list and the CP vertex list is
followed in the reverse direction. The polygon lists are
copied to the outside holding list.

2) Write a procedure for the area-subdivision algorithm for
visible surface detection.

The area-subdivision method takes advantage of area
coherence in a scene by locating those view areas that
represent part of a single surface.
The total viewing area is successively divided into smaller
and smaller rectangles until each small area is simple, i.e. it
is a single pixel, or is covered wholly by a part of a single
visible surface or no surface at all.
The procedure to determine whether we should subdivide
an area into smaller rectangles is:
1. We first classify each of the surfaces, according to their
relations with the area:
Surrounding surface - a single surface completely encloses
the area
Overlapping surface - a single surface that is partly inside
and partly outside the area
Inside surface - a single surface that is completely inside
the area
Outside surface - a single surface that is completely
outside the area.
To improve the speed of classification, we can make use of
the bounding rectangles of the surfaces for early confirmation
or rejection of whether a surface belongs to a given type.
2. Check the results from step 1: if any of the following
conditions is true, then no subdivision of this area is
needed.
a. All surfaces are outside the area.
b. Only one inside, overlapping or surrounding surface
is in the area.
c. A surrounding surface obscures all other surfaces within
the area boundaries.

For cases b and c, the color of the area can be determined
from that single surface.
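The classification in step 1 can be sketched using bounding rectangles, as suggested above for the fast early test. The rectangle layout (xmin, ymin, xmax, ymax) and the function name are assumptions made for illustration:

```python
def classify(area, rect):
    """Classify a surface's bounding rectangle `rect` against a view `area`.
    Both are (xmin, ymin, xmax, ymax). An 'overlapping' verdict from a
    bounding box may still need an exact test against the real surface."""
    ax0, ay0, ax1, ay1 = area
    sx0, sy0, sx1, sy1 = rect
    if sx1 <= ax0 or sx0 >= ax1 or sy1 <= ay0 or sy0 >= ay1:
        return "outside"       # no part of the surface touches the area
    if sx0 <= ax0 and sy0 <= ay0 and sx1 >= ax1 and sy1 >= ay1:
        return "surrounding"   # the surface completely encloses the area
    if sx0 >= ax0 and sy0 >= ay0 and sx1 <= ax1 and sy1 <= ay1:
        return "inside"        # the surface lies completely within the area
    return "overlapping"       # partly inside, partly outside
```

If the verdict is "outside" everywhere, or one "surrounding" surface hides everything, the area need not be subdivided further.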

3) Write a program that allows a user to design a
picture from a menu of basic shapes by dragging
each selected shape into position with a pick device.

#include <dos.h>
#include <graphics.h>
#include <stdio.h>
#include <conio.h>
#include <iostream.h>

union REGS in, out;

int cirrad1 = 0, cirrad2 = 0;

/* Initialise the mouse driver via interrupt 0x33, function 0. */
void detectmouse ( )
{
    in.x.ax = 0;
    int86 (0x33, &in, &out);
    if (out.x.ax == 0)
        printf ("\nMouse failed to initialize");
    else
        printf ("\nMouse successfully initialized");
}

/* Show the mouse pointer (function 1). */
void showmouse ( )
{
    in.x.ax = 1;
    int86 (0x33, &in, &out);
}

/* Hide the mouse pointer (function 2). */
void hidemouse ( )
{
    in.x.ax = 2;
    int86 (0x33, &in, &out);
}

/* Drag the shapes with the left button; the right button (bx == 2) exits. */
void draw ( )
{
    int x, y;
    while (out.x.bx != 2)
    {
        in.x.ax = 3;               /* function 3: read position and buttons */
        int86 (0x33, &in, &out);
        if (out.x.bx == 1)         /* left button held: redraw at the cursor */
        {
            x = out.x.cx;
            y = out.x.dx;
            cleardevice ();
            setcolor (10);
            circle (x, y, cirrad1);                      /* first circle  */
            circle (x, y, cirrad2);                      /* second circle */
            rectangle (x - 40, y - 25, x + 40, y + 25);  /* rectangle     */
            line (x, y, x + 34, y + 23);                 /* line          */
        }
        delay (10);
    }
}

int main ( )
{
    cout << "There will be 2 circles followed by a rectangle and then a line";
    cout << "\nEnter the radii of the two circles: ";
    cin >> cirrad1;
    cin >> cirrad2;
    clrscr ();

    int gdriver = DETECT, gmode;
    initgraph (&gdriver, &gmode, "d:\\tc\\bgi");

    detectmouse ( );
    showmouse ( );
    draw ( );
    hidemouse ( );

    getch ( );
    closegraph ();
    return 0;
}

Part B

4) Design the scan-line algorithm for the removal of hidden
lines from a scene.

Scanline algorithms have a variety of applications in
computer graphics and related fields. These notes contain
some details and hints concerning the programming
assignments relating to scan conversion of polygons,
hidden feature elimination, and shaded rendering. The
most important thing to keep in mind when implementing
an algorithm at the pixel level is to have a clear
model of how geometry specified in floating-point
precision is mapped onto integer-valued pixels, and to
stick to that model religiously!

It is assumed for both the polygonal case and the
parametric curve case that the objects to be drawn have
been transformed to a screen space with X going to the
right, Y going up and Z going into the screen. Furthermore,
the perspective transformation is assumed to have been
performed on all objects so that an orthographic projection
of X and Y onto the screen is appropriate. In the case of
parametric curved surfaces this serves to alter the form of
the functions somewhat but the processing performed
upon those functions remains the same. A scan line
algorithm basically consists of two nested loops, one for
the Y coordinate going down the screen and one for the X
coordinate going across each scan line of the current Y. For
each execution of the Y loop, a plane is defined by the
eyepoint and the scan line on the screen. All objects to be
drawn are intersected with this plane. The result is a set of
line segments in XZ, one (or more) for each potentially
visible polygon on that scan line. These line segments are
then processed by the X scan loop. For each execution of
this loop a scan ray is defined by the eyepoint and a
picture element on the screen. All segments are
intersected with this ray to yield a set of points, one for
each potentially visible polygon at that picture element.
These points are then sorted by their Z position. The point
with the smallest Z is deemed visible and an intensity is
computed from it. The processing during the X scan is,
then, fundamentally the same as the processing during the
Y scan except for the change in dimensionality. During the
Y scan, 3D Polygons are intersected with a plane to
produce 2D line segments. During the X scan, 2D line
segments are intersected with a line to produce 1D points.

For each scan line:
1. Find the intersections of the scan line with all edges of
the polygon.
2. Sort the intersections by increasing x coordinate.
3. Fill in all pixels between pairs of intersections.
Problem:
Calculating intersections from scratch for every scan line is slow.
Solution:
Incremental computation / coherence: between adjacent scan lines
an edge's intersection shifts by the constant amount 1/slope, so
it can be updated with a single addition.
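The three per-scan-line steps above can be sketched as a small routine. This is a minimal sketch with names of my own choosing; the half-open interval rule used here is one common way to handle a scan line passing exactly through a vertex:

```python
def scanline_fill(polygon, y):
    """Return the filled x-spans of `polygon` (a list of (x, y) vertices)
    on scan line `y`: find edge intersections, sort by x, pair them up."""
    xs = []
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        # Half-open rule [ymin, ymax) so a shared vertex is counted once.
        if (y0 <= y < y1) or (y1 <= y < y0):
            xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
    xs.sort()                        # step 2: sort by increasing x
    # Step 3: pixels between consecutive pairs of intersections are inside.
    return [(xs[i], xs[i + 1]) for i in range(0, len(xs), 2)]
```

For the square (0,0), (4,0), (4,4), (0,4) the scan line y = 2 crosses the left and right edges, giving the single span (0, 4).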

5) Suppose you are given an image. How will you
detect the presence of Hidden surfaces and remove
hindrance from the image?
We can detect the presence of hidden surfaces with the
help of visible surface detection. There are two approaches,
called the object-space method and the image-space method.
An object-space method compares objects and parts of
objects to each other within the scene definition to
determine which surfaces, as a whole, we should label as
visible.

In an image-space algorithm, visibility is decided point by
point at each pixel position on the projection plane. Most
visible-surface algorithms use the image-space method,
although the object-space method can be used effectively
to locate visible surfaces in some cases. Line-display
algorithms, on the other hand, generally use the object-space
method to identify visible lines in wireframe displays, but
many image-space visible-surface algorithms can be adapted
easily to visible-line detection.

Hidden surface removal algorithms: -

Considering the rendering pipeline, the projection, the
clipping, and the rasterization steps are handled
differently by the following algorithms:

1. Z-buffering: - During rasterization the depth/Z
value of each pixel is checked against an existing
depth value. If the current pixel is behind the pixel in
the Z-buffer, the pixel is rejected, otherwise it is
shaded and its depth value replaces the one in the Z-
buffer. Z-buffering supports dynamic scenes easily,
and is currently implemented efficiently in graphics
hardware. This is the current standard. The cost of
using Z-buffering is that it uses up to 4 bytes per
pixel, and that the rasterization algorithm needs to
check each rasterized sample against the z-buffer.
The z-buffer can also suffer from artefacts due to
precision errors although this is far less common now
that commodity hardware supports 24-bit and higher
precision buffers.
2. Ray tracing: - attempts to model the path of
light rays to a viewpoint by tracing rays from the
viewpoint into the scene. Although not a hidden
surface removal algorithm as such, it implicitly solves
the hidden surface removal problem by finding the
nearest surface along each view-ray. Effectively this
is equivalent to sorting all the geometry on a per
pixel basis.

3. The Warnock algorithm: - divides the screen into
smaller areas and sorts triangles within these. If
there is ambiguity (i.e., polygons overlap in depth
extent within these areas), then further subdivision
occurs. At the limit, subdivision may occur down to
the pixel level.

4. Coverage buffers and Surface buffer: - faster
than z-buffers and commonly used in games in the
Quake I era. Instead of storing the Z value per pixel,
they store list of already displayed segments per line
of the screen. New polygons are then cut against
already displayed segments that would hide them. An
S-buffer can display unsorted polygons, while a C-buffer
requires polygons to be displayed from the nearest to the
furthest. Because a C-buffer has no overdraw, it makes
the rendering a bit faster.
They were commonly used with BSP trees which
would give the polygon sorting.

5. Painter's algorithm: - sorts polygons by their
barycentre and draws them back to front. This
produces few artefacts when applied to scenes with
polygons of similar size forming smooth meshes and
back face culling turned on. The cost here is the
sorting step and the fact that visual artefacts can
occur.

6. Binary space partitioning: - (BSP) divides a
scene along planes corresponding to polygon
boundaries. The subdivision is constructed in such a
way as to provide an unambiguous depth ordering
from any point in the scene when the BSP tree is
traversed. The disadvantage here is that the BSP tree
is created with an expensive pre-process. This means
that it is less suitable for scenes consisting of
dynamic geometry. The advantage is that the data is
pre-sorted and error free, ready for the previously
mentioned algorithms. Note that the BSP is not a
solution to HSR, only a help.
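The barycentre sort in the painter's algorithm (item 5 above) reduces to a depth sort. A minimal sketch, assuming each polygon is a list of screen-space (x, y, z) vertices where a larger z means farther from the viewer:

```python
def painters_order(polygons):
    """Sort polygons back-to-front by the z of their barycentre
    (the mean of the vertices); the farthest polygon is drawn first."""
    def depth(poly):
        return sum(v[2] for v in poly) / len(poly)
    return sorted(polygons, key=depth, reverse=True)
```

Drawing the returned list in order makes nearer polygons overwrite farther ones, which is exactly why the sort can fail for cyclically overlapping or interpenetrating polygons, the artefacts mentioned above.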

6) Is the Z-buffer better than other hidden-surface
algorithms? Give reasons.

Z-buffering is the management of image depth coordinates in
three-dimensional (3-D) graphics, usually done
in hardware, sometimes in software. It is one solution to
the visibility problem, which is the problem of deciding
which elements of a rendered scene are visible, and which
are hidden. The painter's algorithm is another common
solution which, though less efficient, can also handle non-
opaque scene elements. Z-buffering is also known as depth
buffering.
When an object is rendered by a 3D graphics card, the
depth of a generated pixel (z coordinate) is stored in
a buffer (the z-buffer or depth buffer). This buffer is usually
arranged as a two-dimensional array (x-y) with one
element for each screen pixel. If another object of the
scene must be rendered in the same pixel, the graphics
card compares the two depths and chooses the one closer
to the observer. The chosen depth is then saved to the z-
buffer, replacing the old one. In the end, the z-buffer will
allow the graphics card to correctly reproduce the usual
depth perception: a close object hides a farther one. This is
called z-culling.
The granularity of a z-buffer has a great influence on the
scene quality: a 16-bit z-buffer can result
in artifacts (called "z-fighting") when two objects are very
close to each other. A 24-bit or 32-bit z-buffer behaves
much better, although the problem cannot be entirely
eliminated without additional algorithms. An 8-bit z-buffer
is almost never used since it has too little precision.
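The z-fighting effect described above can be illustrated by quantizing two nearly equal depths (a normalized [0, 1) depth range is assumed here): at 16 bits the two surfaces collapse to the same buffer value, while at 24 bits they remain distinct.

```python
def quantize(z, bits):
    # Map a normalized depth z in [0, 1) to an integer z-buffer value.
    return int(z * (1 << bits))

z_near, z_far = 0.500000, 0.500004    # two surfaces almost coplanar
# A 16-bit buffer puts both depths in the same bucket (z-fighting);
# a 24-bit buffer still tells them apart.
same_at_16 = quantize(z_near, 16) == quantize(z_far, 16)
same_at_24 = quantize(z_near, 24) == quantize(z_far, 24)
```

When the two values collide, which surface "wins" varies per pixel and per frame, producing the flickering z-fighting artifact.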
Z-buffer data in the area of video editing permits one to
combine 2D video elements in 3D space, permitting virtual
sets, "ghostly passing through wall" effects, and complex
effects like mapping of video on surfaces. An application
for Maya, called IPR, permits one to perform post-rendering
texturing on objects, utilizing multiple buffers like z-
buffers, alpha, object id, UV coordinates and any data
deemed as useful to the post-production process, saving
time otherwise wasted in re-rendering of the video.

Z-buffer data obtained from rendering a surface from a
light's POV permits the creation of shadows in a scanline
renderer, by projecting the z-buffer data onto the ground
and affected surfaces below the object. This is the same
process used in non-raytracing modes by the free and
open sourced 3D application Blender.

Z Buffer Algorithm

1. Clear the color buffer to the background color.

2. Initialize all xy coordinates in the Z buffer to one.

3. For each fragment of each surface, compare depth values
to those already stored in the Z buffer:

- Calculate the distance from the projection plane for
each xy position on the surface.

- If the distance is less than the value currently stored
in the Z buffer:

Set the corresponding position in the color buffer to the
color of the fragment.

Set the value in the Z buffer to the distance to that object.

- Otherwise:

Leave the color and Z buffers unchanged.
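The steps above can be sketched as a small routine. Fragments and string colors are simplified stand-ins for real rasterizer output, and depth is assumed normalized to [0, 1) with smaller meaning closer:

```python
def render(fragments, width, height, background=0):
    """Minimal z-buffer: `fragments` is a list of (x, y, depth, color).
    Returns the color buffer after depth testing every fragment."""
    color = [[background] * width for _ in range(height)]  # step 1
    zbuf = [[1.0] * width for _ in range(height)]          # step 2: all ones
    for x, y, depth, col in fragments:                     # step 3
        if depth < zbuf[y][x]:      # closer than what is stored there?
            zbuf[y][x] = depth      # update the depth buffer
            color[y][x] = col       # and shade the pixel
    return color
```

Note that the result is independent of the order in which fragments arrive: the nearest one always ends up in the color buffer.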

The reasons why the z-buffer is better than other
hidden-surface algorithms:

- Z-buffer testing can increase application performance.

- Software buffers are much slower than specialized hardware
depth buffers.

- The number of bitplanes associated with the Z buffer
determines its precision or resolution.
